Coastal wildlife camera assessment
Data inventory, performance evaluation, and biodiversity baseline for terrestrial mammals at Dangermond Preserve
Executive summary
This project evaluated wildlife camera data from the Dangermond Preserve to establish a foundation for biodiversity monitoring. The objectives were to inventory and assess existing image datasets, identify a compatible subset suitable for integrated analysis, and calculate baseline biodiversity metrics (species richness and occupancy) for terrestrial mammals. The resulting analyses provide an assessment of the current monitoring program and a repeatable framework for future efforts.
Data and methods
The study consolidated image data and metadata from The Nature Conservancy and its collaborators. A two-phase approach was used. Phase 1 involved a comprehensive data inventory, standardization to a common framework, and an assessment of data quality to identify a validated subset for analysis. In Phase 2, this subset was used to calculate camera performance (uptime) and biodiversity metrics (richness and occupancy). The analysis focused on 24 locations within the Coastal (traditional) and Networked (wireless) camera arrays during the Fall and Winter seasons from 2022 to 2025. This process yielded a standardized, analysis-ready data set and established the first quantitative baseline for the Preserve’s terrestrial mammal community.
Key findings
- Camera network performance improved over the study period, but operational inconsistencies in the Coastal array led to extended data gaps, limiting direct comparisons across years.
- The mammal community was dominated by widespread, adaptable species, including coyote, deer, and wild pig, which were detected consistently across most locations and time periods.
- Smaller mesocarnivores, such as gray fox and striped skunk, were rarely detected. These low detections may indicate low population densities or reflect monitoring limitations that require further investigation.
- Wildlife activity showed clear seasonal patterns. Puma occupancy increased in the Winter, whereas deer and wild pig occupancy was higher in the Fall, aligning with expected ecological cycles.
- Distinct spatial patterns in biodiversity were observed, including a species richness hotspot along the southern boundary and differences in species composition between the Coastal and Networked camera subsets.
Implications
These findings provide the first multi-year, quantitative baseline of terrestrial mammal diversity and monitoring performance at the Preserve. The results highlight the strengths of the monitoring program as well as critical limitations that must be addressed to ensure its long-term utility.
- Consistent camera uptime is the primary determinant of data reliability. Improving operational consistency, particularly within the Coastal array, is essential for future analyses.
- The monitoring program effectively detected large-bodied species, but low detection rates for smaller mammals were confounded by camera performance issues, such as overgrown vegetation. This highlights a critical need for consistent camera maintenance and vegetation management to ensure the full mammal community is monitored.
- The spatial and seasonal patterns identified in this study demonstrate the network’s potential to track ecological trends at a landscape scale, but underscore the importance of standardized, year-round data collection.
Recommendations and next steps
- Prioritize the validation of Spring and Summer image data to enable comprehensive, year-round biodiversity analyses.
- Implement operational thresholds for minimum camera activity to ensure future data are consistently suitable for analysis.
- Revise deployment and maintenance protocols to improve uptime and data consistency in the Coastal array.
- Adopt modifications to enhance the detection of small mammals, such as targeted vegetation management and adjustments to camera placement, given their ecological importance.
- Develop a streamlined set of visualizations or an interactive dashboard to support routine interpretation of monitoring results by Preserve managers.
Citation
Gray M. 2025. Coastal wildlife camera assessment: Data inventory, performance evaluation, and biodiversity baseline for terrestrial mammals at Dangermond Preserve. A technical report prepared by Pisaster for The Nature Conservancy. Berkeley, CA. 79p.
1 Introduction
1.1 Background
The Dangermond Preserve (hereafter, “the Preserve”) encompasses 99 km2 (24,459 acres) of diverse coastal habitat in Santa Barbara County, California. The convergence of the cool California Current with warmer Southern California Bight waters fosters exceptional biodiversity within this unique landscape. The Preserve’s proximity to the coast creates a mosaic of habitats, including coastal bluffs, beaches, rocky intertidal zones, grasslands, oak woodlands, and wetlands. These diverse ecosystems provide essential habitat for a wide variety of terrestrial mammals, many of which are rare, threatened, or endangered.
Understanding the distribution, abundance, and activity patterns of these mammals is crucial for informing effective conservation strategies, assessing ecosystem health, and guiding future research. Wildlife cameras offer a non-invasive way to answer key questions about wildlife and are a valuable tool for long-term monitoring programs (Steenweg et al. 2017; Wearn and Glover-Kapfer 2019).
Wildlife camera monitoring context
Motion-activated wildlife cameras have improved our ability to study mammal species by capturing observations across multiple locations and extended periods (Rovero et al. 2013). Camera-based studies frequently address local or species-specific questions, with cameras placed opportunistically near features such as road crossing structures, trails, or water sources. While images from targeted studies provide local insights, non-standardized camera placement and variable deployment duration limit our ability to integrate data across locations for landscape-scale biodiversity analysis (Meek et al. 2014).
Accurate assessment of wildlife populations requires sampling designs that minimize bias and ensure the collected data represent wildlife activity across the entire study area (Hayek and Buzas 2010). Stratification and randomization are two key principles that achieve this goal. Stratification ensures cameras are placed across habitat types or environmental features, while randomization guards against placement bias (Quinn and Keough 2002). For long-term monitoring, 30-40 camera locations are recommended to provide comprehensive sampling effort for capturing biodiversity patterns across focal regions (Kays et al. 2020). However, this framework generates large data volumes requiring effective management to prevent analytical bottlenecks.
Standardized camera grids are now widely used to monitor wildlife communities, providing insights into long-term resilience for remote and understudied populations (Cove et al. 2013). Data from these networks identify patterns in spatial distribution (Wevers et al. 2021) and temporal dynamics (Fidino et al. 2022), and are used to study animal behavior such as predator-prey interactions (Smith et al. 2020) and niche partitioning (Cervantes et al. 2023). Long-term deployment of stratified camera surveys establishes baseline data on species composition (Rooney et al. 2025) and enables monitoring of population responses to environmental change and disturbance (Burton et al. 2024).
Study area significance
The Dangermond Preserve offers a unique opportunity to understand coastal use by terrestrial mammals, an area where knowledge is currently limited. The Preserve borders the Pacific Coast with approximately 13 km (8 miles) of undisturbed coastline. The vision for a wild coast documented in the Preserve’s Integrated Resource Management Plan emphasizes this unique value and is supported by technological infrastructure needed to manage large amounts of wildlife camera data.
While a previous survey provided valuable baseline data on wildlife diversity across the Preserve (WRA 2017), it did not focus on coastal regions. More recently, TNC has used motion-detecting cameras to survey mammal use of the Pacific Coast. The Preserve has also hosted various projects by external collaborators, including studies documenting consistent foraging by coyotes (Zilz, Copeland, and Young 2023).
Image data and metadata for all wildlife camera projects at the Preserve are hosted on Animl, an open-source platform built by TNC to manage camera trap data. While Animl provides powerful tools for data integration and machine learning, the data housed within it had not yet been consolidated into a standardized monitoring workflow.
Establishing a standardized inventory for all image data collected at the Preserve was therefore a critical step to ensure data and associated metadata were complete and suitable for analysis.
1.2 Approach
The initial aim of this project was to use existing wildlife camera data to analyze the terrestrial mammal community at the Preserve. However, the specific analytical goals were contingent on a preliminary data assessment. This phased approach is a best practice in ecological analysis, ensuring that methods used are appropriate for the quality and structure of available data.
The work followed an iterative, nested process: a comprehensive data inventory, followed by a compatibility evaluation, and concluding with focused analyses on suitable data subsets. This process progressed through two phases:
Phase 1: Initial inventory and assessment
The critical first step was to conduct a comprehensive inventory of all existing camera data. We evaluated its suitability for estimating biodiversity metrics by assessing data quality, metadata completeness, and study design consistency over time and space. This assessment revealed significant inconsistencies that would have compromised the validity of a broad-scale biodiversity analysis.
Initial inventory: Compiled all image data collected by The Nature Conservancy and collaborators at Dangermond Preserve, Hollister Ranch, and Vandenberg Space Force Base between October 22, 2013 and July 17, 2025.
Data subset selection: Identified images with appropriate spatial coverage and methodological compatibility for integrated analysis. This assessment retained 24 coastal locations with adequate uptime, prioritizing Fall and Winter seasons between September 2022 and February 2025, resulting in 6 occasions (3 Fall, 3 Winter) for analysis.
Data validation: Conducted human review of image labels.
Strategic recommendations: The assessment findings directly informed strategic recommendations providing actionable guidance for camera deployment, site maintenance, and data management. Implementing these practices will significantly enhance data quality and utility, enabling robust, long-term wildlife monitoring.
Phase 2: Focused analyses
Based on the assessment, we used the most complete and reliable data subset to conduct a focused analysis of camera performance and biodiversity.
- Camera performance and biodiversity analysis: Conducted a thorough analysis of camera performance to provide a quantitative understanding of the monitoring design function. Calculated species richness and estimated occupancy for the terrestrial mammal community to establish baseline biodiversity patterns for the validated data subset.
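In a minimal form, the two baseline metrics can be sketched from detection records. Naive occupancy here is simply the proportion of locations with at least one detection; the records, species names, and location identifiers below are hypothetical, and the report's occupancy estimates involve more than this sketch:

```python
from collections import defaultdict

# Hypothetical detection records: (location, species) pairs.
detections = [
    ("cam01", "coyote"), ("cam01", "mule deer"),
    ("cam02", "coyote"), ("cam03", "wild pig"),
]
locations = ["cam01", "cam02", "cam03", "cam04"]

# Species richness: number of distinct species detected.
richness = len({species for _, species in detections})

# Naive occupancy per species: proportion of locations with >=1 detection.
sites_by_species = defaultdict(set)
for location, species in detections:
    sites_by_species[species].add(location)
occupancy = {sp: len(sites) / len(locations)
             for sp, sites in sites_by_species.items()}

print(richness)             # 3
print(occupancy["coyote"])  # 0.5
```

Formal occupancy models additionally account for imperfect detection, which naive occupancy ignores.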
1.3 How this report is organized
The report reflects the project’s two main phases: data scoping and focal analyses. The scoping phase inventoried and assessed existing camera data to identify datasets suitable for analysis. The focal analysis phase used the validated data to assess camera performance and biodiversity metrics.
A camera data section introduces components common to both phases: the project setting, camera monitoring program, and image data collection and processing. This section also describes the data frameworks used throughout the project, including how camera activity status was defined (active, excluded, inactive, absent), steps used to create effort histories, and how sampling intervals were defined.
The scoping and focal analysis phases each have dedicated sections containing their respective methods and results. This organization differs from traditional reports that separate all methods from all results. Instead, methods and results are co-located because each phase represents a distinct analytical workflow.
Scoping phase: The scoping section begins with wildlife camera metadata requirements and core components necessary for robust analysis. This is followed by a description of the workflows used to inventory and standardize existing data, assess data compatibility to identify subsets suitable for integrated analysis, and filter and validate selected data for analytical use. Results include the data inventory, compatibility assessment outcomes, and coverage summaries that determined the locations and seasons used in the focal analyses. The section concludes with recommendations summarizing data identified as compatible and sufficient for analysis, along with workflow optimizations and site-specific adjustments that could improve monitoring efficiency.
Focal analysis phase: The remainder of the report covers the focal analyses of data prepared during the scoping phase. A methods section outlines the approach used to assess camera performance (activity patterns) and biodiversity metrics (species richness and occupancy). Performance and biodiversity results follow the methods description.
The report concludes with discussion of key findings and strategic recommendations to help TNC build a more effective wildlife monitoring program.
2 Camera data
2.1 Project location
The study was conducted at Dangermond Preserve in Santa Barbara County, California, USA (bounding box: longitude -120.49929 to -120.3577, latitude 34.4423 to 34.57418). The 99 km2 (24,459 acres) Preserve borders the Pacific Coast, with approximately 13 km (8 miles) of undisturbed coastline forming much of the property perimeter. Adjacent properties include Hollister Ranch to the southeast and Vandenberg Space Force Base to the northwest. Preserve vegetation consists primarily of shrubland (42%), herbaceous areas (29%), and forest (26%). The coastal zone is dominated by shrubland (62%) and herbaceous vegetation (31%), with minimal oak woodland (2%) (The Nature Conservancy 2022).
As part of a commitment to long-term wildlife monitoring, The Nature Conservancy deployed motion-detecting cameras to survey mammalian wildlife within and surrounding the Preserve. The Preserve serves as a living laboratory, hosting wildlife camera deployments by external collaborators for various short- and long-term research projects. All resulting image files and metadata were stored on Animl, an open-source platform for managing camera trap data. Animl integrates camera trap data from diverse sources, uses customizable machine learning pipelines for automated wildlife detection and identification, and provides tools for image management, alerts, and data export.
Previous work designed a statistically robust camera network for monitoring terrestrial wildlife (Gray 2024), recommending a coastal transect for detailed Pacific Coast monitoring. The optimal design featured a linear transect of 20 cameras with 800-meter spacing, positioned within 200 meters of the coastline. This design incorporated 14 existing cameras (referred to as the Networked subset) and 6 additional cameras to complete coastal coverage. The design used stratified random sampling to ensure adequate spatial coverage while maintaining appropriate camera separation, dividing the transect into equal-length segments and randomly selecting locations within each segment.
Camera locations
Distribution of monitoring locations across the coastal transect (white rectangles) at Dangermond Preserve in Santa Barbara County, CA. Color indicates traditional or wireless camera type.
2.2 Image data
Image data collection
Surveys used two camera classes that differ in data storage and retrieval: traditional cameras and wireless cameras. Traditional cameras store images locally on internal memory cards, requiring routine site visits for data retrieval. Each site visit typically defines a “deployment” with specific start and end dates. Wireless cameras transmit image data to a central hub through radio networks or cellular connections, enabling real-time monitoring without routine site visits. Animl was initially designed for wireless cameras with continuous data transmission, defining deployments as ongoing datasets without predetermined end dates.
Traditional cameras required site visits approximately every 18 weeks (mean: 129 days, range: 22-827 days) for image retrieval and maintenance activities including camera setting verification, battery replacement, and detection zone alignment. Deployment durations were documented in a “Camera Maintenance” workbook that recorded site visit dates and maintenance activities, though camera outage documentation was incomplete. Wireless cameras were visited only when image transmission revealed operational problems, and lacked systematic service documentation.
Image data processing
Data processing in Animl involved three steps: (1) uploading image files to cloud storage, (2) automated detection and classification using MegaDetector, and (3) human validation of automated results.
MegaDetector evaluated each image for object presence, assigning detected objects a bounding box, classification label, and confidence score (see the Animl documentation for details). Images processed before June 2025 received one of four labels: animal, person, vehicle, or empty. A species-level classifier (SpeciesNet) was implemented for some cameras in June 2025, providing finer taxonomic identification up to binomial level (Genus species). SpeciesNet was still under evaluation during this study.
Following automated processing, trained personnel visually inspected images to confirm or correct species identifications and object counts. Images classified as animal required species-level identification, recorded as common names (e.g., coyote, bobcat). TNC staff completed initial validation, identifying all mammals >100g to species level when possible, while non-target species (small birds, reptiles) were identified to class level. Difficult identifications were labeled unknown and tagged as uncertain.
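As one illustration of how automated results can feed human validation, detections can be triaged by confidence score before review. This is a sketch under an assumed record layout and an arbitrary 0.8 threshold, not Animl's actual pipeline:

```python
# Triage automated detections by confidence before human review.
# Record layout and the 0.8 threshold are illustrative assumptions.
detections = [
    {"image": "img_001.jpg", "label": "animal", "conf": 0.97},
    {"image": "img_002.jpg", "label": "empty",  "conf": 0.55},
    {"image": "img_003.jpg", "label": "animal", "conf": 0.42},
]

THRESHOLD = 0.8
# High-confidence animal detections: confirm species and counts.
confident_animals = [d for d in detections
                     if d["label"] == "animal" and d["conf"] >= THRESHOLD]
# Low-confidence detections of any label: inspect more closely.
needs_scrutiny = [d for d in detections if d["conf"] < THRESHOLD]

print(len(confident_animals))  # 1
print(len(needs_scrutiny))     # 2
```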
2.3 Derived data
As described in the Approach (Section 1.2), the project’s analytical goals were contingent on a preliminary data assessment. The workflow and methods used for the data inventory and subset selection are described in the Scoping section.
The scoping assessment identified the data used as input to assess monitoring performance and biodiversity metrics. The specific methods used for this phase are described in the Methods section.
Survey effort
To account for equipment malfunctions and maintenance impacts, we created a daily effort history for each camera location. Effort was quantified as the number of 24-hour periods (camera days) when cameras operated according to the standardized survey design. Periods of correct camera operation were classified as uptime.
We assigned a daily operational status for each location throughout the study period by analyzing image time stamps to identify start and end dates, supplemented by deployment metadata documenting non-functional periods. Daily camera status codes included:
active: Camera functional and collecting images according to survey design
excluded: Camera functional but images intentionally excluded from analysis due to survey design incompatibility (e.g., camera view shifted or blocked by vegetation)
inactive: Camera deployed but non-functional, collecting no images (e.g., due to camera damage, full memory card, dead batteries, corrupted data)
absent: Camera not deployed in the field
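The effort-history construction described above can be sketched as follows; the outage ranges, field names, and helper functions are illustrative assumptions, not the project's actual code:

```python
from datetime import date, timedelta

def daily_statuses(start, end, outages):
    """Assign one of the four status codes to each day from start to end.

    `outages` maps (outage_start, outage_end) date ranges to a non-active
    status code ("excluded", "inactive", or "absent"). Days outside any
    outage are considered "active".
    """
    statuses = {}
    day = start
    while day <= end:
        status = "active"
        for (o_start, o_end), code in outages.items():
            if o_start <= day <= o_end:
                status = code
                break
        statuses[day] = status
        day += timedelta(days=1)
    return statuses

def uptime(statuses):
    """Camera days of effort: count of days with status 'active'."""
    return sum(1 for s in statuses.values() if s == "active")

# Hypothetical location: one 5-day outage (e.g., dead batteries) in September.
history = daily_statuses(
    date(2022, 9, 1), date(2022, 9, 30),
    {(date(2022, 9, 10), date(2022, 9, 14)): "inactive"},
)
print(uptime(history))  # 25 of 30 camera days active
```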
Sampling intervals
Wildlife camera data are continuous, so they must be aggregated into discrete sampling periods to calculate meaningful metrics like detection rates and occupancy. We established four hierarchical temporal scales to analyze wildlife activity patterns and temporal trends: overall and by year, season, and occasion. The specific duration and coverage of overall and annual periods differed between initial inventory assessment and focused analyses.
Overall: Overall sampling periods encompassed the entire project duration, providing comprehensive project-level summaries and reference points for evaluating finer-scale temporal patterns. Duration definitions varied by analysis stage: initial inventory and compatibility assessment spanned the complete data collection period, while focused analyses used September 2022-February 2025 (538 days), reflecting the period with robust, validated data.
Year: Annual sampling periods followed 12-month calendar years (January-December) to assess inter-annual variation in species detections and camera performance. These summaries track long-term trends after monitoring optimization. For the initial inventory, we used year-round data. The focused analyses used only Fall and Winter seasons. Since Winter seasons span calendar years, sample sizes in the focused analyses varied: 2022 included 4 months (September-December), 2025 included 2 months (January-February only).
Season: Seasonal periods were defined as 3-month intervals corresponding to North American seasons: Fall (September-November), Winter (December-February), Spring (March-May), and Summer (June-August). This framework assessed intra-annual differences in species activity and camera effectiveness, helping identify optimal monitoring periods. Seasonal summaries compared distinct seasons by pooling data from equivalent periods across years, though Winter seasons spanning calendar years prevented perfect alignment with annual summaries. Exploring seasonal variation helps identify times of year that yield the most informative data relative to effort. For instance, if summer consistently produces false triggers from moving vegetation, one could evaluate whether excluding summer data significantly impacts annual estimates. If not, restricting future monitoring to other seasons could increase efficiency, a strategy used by large-scale projects like Snapshot USA, which focuses on Fall surveys when many mammals are most active (Cove et al. 2021).
Occasion: Occasions represented sequential 3-month periods that enabled time-series analysis. While each occasion corresponds to a season, the analytical purpose differs. Seasonal summaries compare the four distinct seasons by pooling all data from a given season (e.g., all Falls). In contrast, occasion summaries preserve the chronological sequence of these 3-month blocks to visualize trends over time. For the initial inventory beginning August 26, 2021, the five days of Summer 2021 data were included with Fall 2021 to avoid an uncharacteristically short sampling period, making Fall 2021 span August 26-November 30, 2021. All subsequent occasions followed standard 3-month seasonal definitions.
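The season and occasion definitions above can be expressed as a small date mapping. The occasion-labeling convention for December (assigning it to the Winter occasion that ends the following year) is an assumption for illustration, since the report does not specify how spanning Winters are named:

```python
from datetime import date

# Season definitions from the report: Fall = Sep-Nov, Winter = Dec-Feb,
# Spring = Mar-May, Summer = Jun-Aug.
SEASONS = {9: "Fall", 10: "Fall", 11: "Fall",
           12: "Winter", 1: "Winter", 2: "Winter",
           3: "Spring", 4: "Spring", 5: "Spring",
           6: "Summer", 7: "Summer", 8: "Summer"}

def season_of(d):
    """Pooled seasonal category for a date (ignores year)."""
    return SEASONS[d.month]

def occasion_of(d):
    """Chronological occasion label, e.g. 'Fall 2022'.

    Assumption: December belongs to the Winter occasion labeled by the
    year in which that Winter ends, so December 2022 -> 'Winter 2023'.
    """
    year = d.year + 1 if d.month == 12 else d.year
    return f"{season_of(d)} {year}"

print(season_of(date(2022, 12, 15)))    # Winter
print(occasion_of(date(2022, 12, 15)))  # Winter 2023
print(occasion_of(date(2023, 1, 15)))   # Winter 2023
```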
3 Scoping
3.1 Required metadata
The importance of standardized metadata
Robust ecological analyses depend on well-managed and standardized data. This project’s data inventory and assessment were therefore founded on established metadata standards. Organizing the data required a detailed understanding of four distinct types of information: the overall project context, the specific camera hardware used, the details of each camera deployment, and the final image observations. This framework ensured that the raw media files collected at the Preserve could be transformed into a structured, analyzable dataset.
Without accurate records of how, where, and when data were collected, it becomes difficult to ensure data comparability, account for potential biases (e.g., differences in detection probability due to camera settings or placement), replicate analyses, or integrate data from multiple efforts. Adhering to established standards facilitates the efficient exploration, validation, and analysis of camera trap information, ultimately enabling more repeatable science and timely, data-driven management decisions (Young, Rode-Margono, and Amin 2018; Wearn and Glover-Kapfer 2017).
“Every researcher should pay attention to data hygiene. The underlying idea is: have ten well-documented elements, rather than a hundred poorly documented ones. Only in this way can we turn the cacophony of raw image data into useful quantitative data.” (Reyserhove, Norton, and Desmet 2023)
Several open-access resources provide protocols and templates to help standardize wildlife camera metadata. Key resources that informed our approach are listed below.
Wildlife Insights: Minimum Metadata Standards, including their data dictionary and metadata templates.
Global Biodiversity Information Facility (GBIF): Best Practices for Managing and Publishing Camera Trap Data.
Resources Information Standards Committee (RISC): Wildlife Camera Metadata Protocol.
WildCAM Network: Remote Camera Survey Guidelines & AB Metadata Standards.
Core metadata components
The project’s data inventory was organized around four core metadata components, which align with common standards like those developed by Wildlife Insights. These components represent the essential information required to prepare the data for analysis and correspond to the structured data tables developed during this project.
projects: High-level information defining the study’s scope, objectives, methodologies, and personnel.
cameras: An inventory of the specific camera units used, including their make, model, and unique identifiers.
deployments: Details for each instance a camera was active at a location, including coordinates, operational dates, and settings.
images: The processed observations from media files, including species identifications, counts, and the date and time of each observation.
By assessing the availability, completeness, and consistency of these four metadata components, we determined the readiness of the existing data for various biodiversity analyses.
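A completeness check across the four components can be sketched as follows; the required fields listed are an illustrative subset, not the full Wildlife Insights standard:

```python
# Minimal completeness check across the four metadata components.
# Required fields per component are illustrative assumptions.
REQUIRED = {
    "projects":    {"project_id", "objectives", "contact"},
    "cameras":     {"camera_id", "make", "model"},
    "deployments": {"deployment_id", "camera_id", "latitude", "longitude",
                    "start_date", "end_date"},
    "images":      {"image_id", "deployment_id", "timestamp", "label"},
}

def missing_fields(component, records):
    """Return required fields absent or empty in any record of a component.

    Uses a simple truthiness test; a legitimate 0.0 coordinate would need
    a stricter check in real use.
    """
    missing = set()
    for rec in records:
        for field in REQUIRED[component]:
            if not rec.get(field):
                missing.add(field)
    return missing

deployments = [{"deployment_id": "d1", "camera_id": "c1", "latitude": 34.45,
                "longitude": -120.47, "start_date": "2022-09-01",
                "end_date": None}]
print(missing_fields("deployments", deployments))  # {'end_date'}
```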
projects metadata
The projects metadata serves as the highest-level container, defining the overall study objectives, sampling design, and personnel. This component provides the necessary context for anyone using the data to understand its scope and limitations. For analytical purposes, project-level details about objectives, focal species, survey design, and camera settings are critical for selecting comparable datasets. Documenting this information makes the working knowledge of the project team explicit, which is essential for long-term data usability and for publishing discoverable, well-described datasets.
cameras metadata
The cameras metadata component is a simple inventory of the physical hardware used in a project. Each camera device is recorded as a unique entry, with attributes that remain constant over time, such as its make, model, and serial number. The model consists of the manufacturer and model name (e.g., Reconyx-PC800). Understanding camera specifications (e.g., sensor resolution, detection zone, flash type) is crucial for interpreting detection data and accounting for potential differences in equipment performance.
Distinguishing cameras from locations
A camera is the physical device, whereas a location is the fixed place where a camera is installed, defined by geographic coordinates. In long-term monitoring, locations are static survey points that are monitored repeatedly over time. While a single camera may remain at a location for an extended period, the physical device is expected to be replaced periodically due to failure, damage, or theft. Therefore, it is essential to treat the camera inventory as distinct from the list of survey locations.
deployments metadata
A deployment represents a single, continuous period during which a specific camera collected comparable data at one location (also referred to as a survey or sampling point; Wearn and Glover-Kapfer 2017). This component is the fundamental unit for calculating sampling effort (typically measured in trap-days), which is the basis for many biodiversity metrics like detection rates and occupancy. A deployment is considered to have ended any time an event occurred that could alter the probability of detecting wildlife. If the camera continued to collect data after such a change, a new deployment period was initiated.
The table below summarizes the types of events that would trigger the end of one deployment and the beginning of another.
| Attribute | Examples |
|---|---|
| Camera performance | Battery failure, full memory card, camera damage, corrupted data |
| Spatial placement of camera | Adjustment or change to: coordinates, camera height, tilt, or view orientation |
| Camera settings | Adjustment or change to: trigger sensitivity, quiet period, or detection distance |
| Camera maintenance | Battery replacement, memory card replacement, detection zone realignment |
| Site maintenance | Vegetation clearing within the camera's detection zone |
The deployments metadata documents the specific location, functional time period, and settings for each survey. This contextual data serves as a central reference linking each image to its corresponding survey-specific attributes. Maintaining a detailed deployment inventory is a proactive way to ensure essential contextual information about each survey is well organized and linked to corresponding images.
Each deployment record contains three key types of information:
Location attributes: The geographic coordinates where the camera was placed, expressed as latitude and longitude in decimal degrees (WGS84 datum) and determined using georeferencing best practices (Chapman and Wieczorek 2020). A location can also be described with a name, identifier, or description, but we recommend always recording the coordinates.
Temporal attributes: The functional start and end dates of the camera’s operation for that deployment period. A deployment’s active period is defined by its functional start and end dates, which may differ from when the camera was physically placed or retrieved. For instance, if a camera’s view became obscured by vegetation, the deployment’s functional end date was recorded as the last day it collected comparable data, not the day it was visited for maintenance.
Camera attributes: The specific settings used during the deployment, such as trigger sensitivity, quiet period, detection distance, and camera height and tilt. These attributes are recorded at the deployment level because they could be changed between different deployments even when using the same physical camera. Camera attributes that are constant over time, like the make, model, and serial number, are recorded separately in the cameras metadata table.
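The three attribute types can be combined into a single deployment record from which sampling effort (trap-days) is derived. The schema below is an illustrative assumption, not the project's actual data model:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Deployment:
    """Illustrative deployment record mirroring the three attribute types."""
    location_id: str
    latitude: float           # WGS84 decimal degrees
    longitude: float
    start: date               # functional start date
    end: date                 # functional end date
    trigger_sensitivity: str  # deployment-level camera setting
    quiet_period_s: int       # deployment-level camera setting

    def trap_days(self):
        """Sampling effort: number of 24-hour periods in the deployment."""
        return (self.end - self.start).days + 1

# Hypothetical coastal deployment spanning one Fall season.
d = Deployment("DP-C01", 34.45, -120.47,
               date(2022, 9, 1), date(2022, 11, 30),
               trigger_sensitivity="high", quiet_period_s=30)
print(d.trap_days())  # 91
```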
images metadata
The images metadata contains the processed observations, or labels, from each media file; labels are also referred to as observations, classifications, annotations, or identifications. Labels identify the contents of an image, typically the animal species observed, optionally with information on count, sex, age class, life stage, or behavior. Labels are also used to classify non-target images, such as those containing only humans, vehicles, or no discernible subject (blanks).
3.2 Workflow
The methods used to inventory, assess, and prepare the existing wildlife camera data from the Preserve for analysis are detailed below. The workflow was designed to evaluate a large, heterogeneous collection of wildlife camera images and transform relevant records into a standardized, analysis-ready data set. This process involved three main stages:
- A data inventory to collate and standardize all available data.
- A data compatibility assessment to identify subsets suitable for integrated analysis.
- A final data preparation phase to filter and validate the selected data for analytical use.
Data inventory and standardization
The first stage involved creating a comprehensive, standardized record of all available images and their associated metadata. All data were hosted on Animl, an open-source platform for managing camera trap data. A primary challenge was that Animl was originally designed with a camera-centric data model suitable for short-term surveys where cameras are relocated. Because long-term monitoring is typically location-centric — with fixed monitoring locations and cameras replaced only as needed — a significant effort was required to restructure the data to document attributes for fixed locations over time. The curated data were structured to align with the four core components of the Wildlife Insights framework: projects, cameras, deployments, and images.
Figure 2. Flowchart showing the overall workflow used to create the four core metadata tables (projects, cameras, deployments, images)
A key outcome of this initial inventory was the grouping of related camera locations into discrete subsets for analysis. Because standardized metadata were often incomplete, we used data provenance — specifically the lead researcher or managing entity — as a proxy for methodological consistency. This approach assumed that a single entity applied a consistent protocol. Each resulting subset was assigned an informal name (e.g., Young, Coastal) to facilitate communication. These subsets became the fundamental units for subsequent assessment and analysis.
projects metadata
The projects metadata component was created to document the overall context for each data subset, including its scientific rationale, general methods, and personnel. A key task was to create a clear distinction between an analytical project (a data subset defined by a consistent survey design, such as the Coastal subset) and an Animl project (a broad organizational folder within the platform, such as Dangermond Preserve). For this report, “project” or “subset” refers to the former, analytical definition.
To define these analytical subsets, we began by collating all camera deployment records from the Dangermond Preserve and Hollister Ranch projects in the Animl database. The deployments were grouped based on the project manager and study objectives. Each resulting subset represented a collection of camera locations that shared consistent survey designs, camera settings, and scientific goals, making them suitable for combined analysis.
Figure 3. Flowchart showing the steps used to create the projects metadata
| Subset name | Description | Animl prefix |
|---|---|---|
| Coastal | Traditional Reconyx cameras managed by TNC at coastal locations within the Dangermond Preserve to monitor wildlife near the Pacific Coast (2022-present) | JLDP_coastal |
| Networked | Networked (local) Buckeye cameras installed by TNC at coastal locations within the Dangermond Preserve for live wildlife monitoring near the Pacific Coast (2022-present) | TNC_Buckeye |
| Coastal (UCSB) | Traditional Reconyx cameras managed by UCSB at coastal locations within the Dangermond Preserve to monitor wildlife near the Pacific Coast (2021-2022) | JLDP_coastal |
| Networked (UCSB) | Networked (local) Buckeye cameras installed by UCSB at dune locations within the Dangermond Preserve to collect time-series data about sand transport dynamics (2024-present) | UCSB_Buckeye |
| Young | Traditional Browning, Buckeye, Reconyx, and RidgeTec cameras installed by the Young Lab (UCSB) within Dangermond Preserve, Hollister Ranch, and Vandenberg Space Force Base (2018-present) | N/A |
| WRA | Traditional Reconyx cameras installed by TNC across the Dangermond Preserve to conduct a preserve-wide biodiversity assessment (2013-2014) | WRA |
| UCSC | Traditional RidgeTec cameras installed by UCSC at coastal locations within the Dangermond Preserve for intertidal monitoring near the Pacific Coast (2023-2024) | UCSC_Ridgetec |
| Cellular | Networked (cellular) RidgeTec cameras installed by TNC at coastal locations within the Dangermond Preserve for live wildlife monitoring near the Pacific Coast (2022-2023) | TNC_Ridgetec |
| Rosie | Traditional Browning, Moultrie, and Reconyx cameras installed by Rosie (UCSB) at coastal locations within the Dangermond Preserve to collect short-term, pre- and post-treatment data (2024) | UCSB_RM |
Subset descriptions
Informal names used to group comparable data sets in the Dangermond Preserve and Hollister Ranch Animl projects, with the corresponding Animl prefix and a brief description of the objectives and general design.
cameras metadata
The cameras table provided a complete inventory of the unique physical camera devices used and their attributes. Creating this table required an iterative process to resolve ambiguities in the source data. Initial assessment revealed that camera identifiers were not always unique (e.g., a location name was used as a camera_id, or one serial number was attributed to cameras of different makes). To resolve this, a unique identifier for each physical camera was created by combining the recorded camera_id and the camera make, ensuring every hardware unit was uniquely cataloged. Each row represented one camera, with columns for the unique identifier, camera make, camera model, serial number, and purchase date.
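As a minimal sketch of this de-duplication step (the camera_id values and makes below are hypothetical, chosen to mirror the ambiguity where a location name was reused as a camera_id):

```python
# Hypothetical camera records illustrating the ambiguities found in the
# source data: a location name reused as a camera_id, so the identifier
# is not unique on its own.
cameras = [
    {"camera_id": "Coastal 3", "make": "Reconyx"},
    {"camera_id": "Coastal 3", "make": "Buckeye"},
]

# Combine the recorded camera_id with the camera make so that every
# physical hardware unit receives a unique identifier.
for cam in cameras:
    cam["unique_id"] = f'{cam["camera_id"]}_{cam["make"]}'

unique_ids = [cam["unique_id"] for cam in cameras]
assert len(unique_ids) == len(set(unique_ids))  # every unit uniquely cataloged
```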
Figure 4. Flowchart showing the steps used to create the cameras metadata
deployments metadata
The deployments table documented the location, functional time period, and settings for each camera survey. These attributes are essential for calculating effort (measured in trap-days) and all subsequent biodiversity metrics. A deployment was defined as the continuous time interval during which a specific camera collected comparable image data at one location. A deployment was considered to have ended if the camera was moved, malfunctioned, underwent substantial setting changes, or had its view obstructed in a way that compromised data quality (Reyserhove, Norton, and Desmet 2023).
We created an inventory of all unique camera locations by extracting deployment attributes from the image metadata and standardizing spatial coordinates (Figure 5). Records with missing or erroneous coordinates were identified and corrected in consultation with project staff.
Establishing accurate functional start and end dates for each deployment was a time-intensive but essential process. The main challenge was that Animl treats a camera’s entire operational history at a location as a single, continuous deployment by default, which required us to retroactively define deployment periods based on camera performance. Additional challenges included the lack of centralized service logs and the absence of retained empty images needed to verify continuous camera function. Although traditional cameras had partial service records in a “Camera Maintenance” workbook, wireless cameras lacked any service documentation.
We therefore conducted a systematic manual review of the complete image history for each location to identify events that indicated breaks in data collection. This review process was supplemented by available service logs. Key indicators used to define deployment breaks included:
- Maintenance activities identified by personnel in images.
- Extended time gaps in image capture, indicating camera malfunction.
- Abrupt changes in the camera’s field of view.
- Obstructions (e.g., vegetation growth) causing excessive false-triggers.
This process ensured that each deployment record represented a period of consistent, functional data collection suitable for comparative analysis of survey effort and biodiversity metrics. Each row represented one deployment, with columns for the deployment identifier, start and end dates, functional dates, camera issues, and observations that may influence data analysis.
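Survey effort in trap-days follows directly from the functional dates recorded in this table. A minimal sketch, with hypothetical deployment identifiers and dates, and assuming effort counts both the start and end dates:

```python
from datetime import date

# Hypothetical deployments for one location, each with functional
# start and end dates (the interval of comparable data collection).
deployments = [
    {"deployment_id": "coastal_02_d1",
     "start": date(2022, 9, 1), "end": date(2022, 11, 30)},
    {"deployment_id": "coastal_02_d2",
     "start": date(2023, 1, 15), "end": date(2023, 2, 10)},
]

def trap_days(dep):
    """Effort for one deployment: the number of functional days,
    counting both the start and end dates (an assumed convention)."""
    return (dep["end"] - dep["start"]).days + 1

total_effort = sum(trap_days(d) for d in deployments)  # 91 + 27 = 118
```

The gap between the two deployments (December through mid-January) contributes no effort, which is exactly how a malfunction or obstruction period is excluded from analysis.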
Figure 5. Flowchart showing the steps used to create the deployments metadata
images metadata
The images table contained the wildlife observations, linking each species detection to a specific deployment, date, and time. This table was populated using metadata from all images uploaded to the Animl platform, forming the complete data set that would undergo subsequent filtering and validation. Each row represented one image-label combination, with columns for EXIF-derived data (e.g., date, time) and the manually-validated labels and counts initially generated by the machine learning model. Separate rows were used to document multiple species within the same image.
Figure 6. Flowchart showing the steps used to create the images metadata
Compatibility assessment
Following the inventory, we assessed the analytical suitability of the curated data subsets. This step was needed to determine which subsets were sufficiently compatible to be integrated for robust comparative analysis, as combining data from disparate study designs can invalidate ecological inferences.
The Coastal and Networked subsets were selected as the priority subsets for this project. This decision was based on their direct alignment with the Preserve’s coastal monitoring goals and their inclusion of wireless cameras, a conservation technology intended to accelerate learning and improve management. These two subsets formed the analytical baseline against which all other subsets were compared.
We evaluated the remaining subsets against the Coastal and Networked subsets using four sequential criteria. A subset had to meet all four criteria to be included in the integrated analysis.
Study design: The subset needed to be designed for community-level monitoring of medium- to large-bodied mammals using unbaited, motion-triggered cameras, with the retention of all empty images.
Spatial distribution: The camera placement strategy (e.g., targeted vs. systematic) and geographic focus had to be relevant to the Preserve’s coastal transect and include a sufficient number of locations for robust statistical estimates.
Temporal coverage: The subset required operational overlap with the priority subsets. To estimate overlap, we calculated the total time elapsed between the first and last recorded image dates among all locations in each subset. This coarse approximation indicated the general period of operation for each subset but did not account for potential gaps where individual cameras may have been inactive.
Image data quality: All wildlife-containing images within the subset required validation by a human reviewer to ensure reliable species data.
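The coarse temporal-overlap estimate described under the temporal coverage criterion can be sketched as follows (the location names and dates are hypothetical, though chosen to span the Networked subset's overall range; durations are assumed to count both endpoint dates, as in the subset summary table):

```python
from datetime import date

# Hypothetical first and last image dates per location in one subset.
image_dates = {
    "loc_A": (date(2022, 10, 6), date(2024, 3, 1)),
    "loc_B": (date(2023, 1, 2), date(2025, 7, 17)),
}

# Coarse operational period: earliest first image to latest last image
# across all locations; within-period gaps are deliberately ignored.
first = min(start for start, _ in image_dates.values())
last = max(end for _, end in image_dates.values())
duration_days = (last - first).days + 1  # inclusive of both endpoints
```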
Data coverage assessment
After identifying compatible data subsets, we performed a final filtering and validation step to create a high-quality data set for calculating performance and biodiversity metrics. Our goal was to ensure all data included in the final analysis met strict quality thresholds for minimum sampling effort and image validation completeness.
The workflow was hierarchical, beginning with an assessment of individual camera locations at 3-month intervals, followed by the selection of optimal seasons for analysis, and concluding with a targeted manual image validation effort.
We focused on 26 locations from the Coastal, Coastal (UCSB), and Networked subsets (September 1, 2021 to May 31, 2025), excluding all other subsets regardless of their location relative to Preserve boundaries.
The assessment was structured around two temporal concepts: occasions and seasons. An occasion refers to a specific 3-month interval (e.g., Fall 2022), preserving the chronological sequence needed for time-series analysis. A season refers to the pooled data from all occasions of the same season (e.g., all Falls combined), used to assess intra-annual trends. The initial data quality assessment was conducted at the occasion level, as it represented the finest temporal scale in the analysis.
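A minimal sketch of this occasion assignment, assuming meteorological season boundaries (Fall = September–November, Winter = December–February, and so on, consistent with the September 1 study start) and assuming that Winter occasions are labeled by the year of their December start:

```python
from datetime import date

# Assumed season boundaries: Fall = Sep-Nov, Winter = Dec-Feb,
# Spring = Mar-May, Summer = Jun-Aug.
SEASON_BY_MONTH = {
    12: "Winter", 1: "Winter", 2: "Winter",
    3: "Spring", 4: "Spring", 5: "Spring",
    6: "Summer", 7: "Summer", 8: "Summer",
    9: "Fall", 10: "Fall", 11: "Fall",
}

def occasion(d: date) -> str:
    """Map an image date to its 3-month occasion, e.g. 'Fall 2022'.

    Winter occasions are labeled by the year of their December start
    (an assumption), so January and February roll back one year.
    """
    season = SEASON_BY_MONTH[d.month]
    year = d.year - 1 if d.month in (1, 2) else d.year
    return f"{season} {year}"
```

Under this scheme, a January 2023 image falls in the Winter 2022 occasion, which appears consistent with the year assignment used in the active-locations summary.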
Location assessment
The first step was to evaluate the sampling effort for each camera location during each occasion. To be included, a location had to have at least 14 functional trap-days within that 3-month period. This threshold was established to ensure adequate sample sizes and prevent bias from cameras with brief or intermittent operation. A trap-day was considered functional only if the camera was confirmed to be operational with an unobstructed view.
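The inclusion rule can be sketched as a simple filter (location names and trap-day counts below are hypothetical):

```python
# Hypothetical functional trap-day counts per (location, occasion).
trap_days = {
    ("Coastal 3", "Fall 2022"): 88,
    ("Coastal 7", "Fall 2022"): 9,    # below threshold: excluded
    ("North Beach", "Fall 2022"): 91,
}

MIN_TRAP_DAYS = 14  # minimum effort required within a 3-month occasion

included = {key for key, days in trap_days.items() if days >= MIN_TRAP_DAYS}
```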
Season prioritization
Next, we selected which seasons to prioritize for the intensive manual image validation required for the final analysis. This decision was based on two factors: the number of active locations per occasion and the estimated validation workload. We prioritized seasons that had a consistently high number of active locations over time, ensuring a robust spatial sample. We then estimated the number of unreviewed images for these seasons to gauge the feasibility of completing validation. The primary goal was to validate all images containing potential wildlife (object) and at least 10% of images classified by the machine learning model as empty. This 10% threshold for empty images is critical for accurately estimating species detection probabilities.
Image validation
The manual image validation was a targeted and time-intensive effort. The workflow was initially delayed because the database platform only allowed metadata export for validated images; access to the full metadata for unreviewed images became available in late May 2025. Once available, we proceeded with validation season by season, starting with fall, which had the lowest image volume. For images classified as empty, we reviewed 100% of them for any month where a location had fewer than 200 such images; for months with more than 200, we reviewed a random 10% subsample.
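A sketch of the empty-image subsampling rule (the function name and the handling of exactly 200 images are assumptions; the text specifies only "fewer than 200" and "more than 200"):

```python
import random

def empty_images_to_review(image_ids, threshold=200, fraction=0.10, seed=42):
    """Select empty images for manual review at one location in one month:
    review all of them below the threshold, otherwise draw a random
    subsample of the given fraction."""
    if len(image_ids) < threshold:
        return list(image_ids)
    rng = random.Random(seed)  # fixed seed so the subsample is repeatable
    n = max(1, round(len(image_ids) * fraction))
    return rng.sample(list(image_ids), n)
```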
This validation was successfully completed for the Fall and Winter seasons. However, due to the substantially larger volume of images collected during the Spring and Summer, their validation could not be completed within the project’s scope. Consequently, Spring and Summer data were excluded from all final analyses. The resulting curated data set, representing consistently monitored locations during periods of complete data validation, formed the basis for calculating all performance metrics.
3.3 Assessment results
The data inventory identified 158 distinct locations across the Dangermond Preserve and Hollister Ranch Animl projects. Following the inventory, a compatibility assessment determined which data subsets were suitable for integrated analysis. The assessment confirmed the Coastal and Networked subsets were fully compatible. Portions of the Young subset were also deemed compatible, and a new subset, Coastal (UCSB), was created and retained for baseline comparison. All other subsets were excluded from the integrated analysis due to critical incompatibilities in study design, spatial focus, or data quality.
Camera locations
Wildlife camera locations in the Dangermond Preserve and Hollister Ranch projects in Animl (n = 111). Spatial coordinates were unavailable for 47 locations (not shown).
Location inventory
The 158 camera locations were distributed across eight primary subsets. The vast majority of locations (Dangermond Preserve: 132, Hollister Ranch: 24) were associated with a single Animl project, though two locations in the Young subset appeared in both (Agua Caliente and Auggies). Spatial coordinates were unavailable for 30% of locations (n = 47), which were retained in summary tables but excluded from maps. Although named for the property, the Dangermond Preserve Animl project also contained a number of locations outside the Preserve boundary, primarily within the extensive Young subset.
| Animl project name | Locations |
|---|---|
| Dangermond Preserve | 132 |
| Hollister Ranch | 24 |
| Both | 2 |
| Total | 158 |
Location inventory
Inventory of distinct locations in the Dangermond Preserve and Hollister Ranch Animl projects showing project name, subset name, location name, earliest image date, latest image date, and spatial coordinate.
Compatibility overview
The compatibility assessment systematically evaluated each subset against the Coastal and Networked subsets, which served as the analytical baseline due to their direct alignment with project goals. The final selection decisions are detailed below.
Included subsets
The following subsets were determined to be compatible and were carried forward for data validation and filtering:
Coastal and Networked: These two priority subsets met all four compatibility criteria, offering a consistent community-level monitoring design, complementary spatial coverage along the coastal transect, overlapping temporal coverage from 2022 onward, and complete, human-verified image validation.
Coastal (UCSB): This new subset was created to resolve a methodological inconsistency. We discovered that data from Coastal locations collected prior to May 2022 were managed differently (empty images were deleted). To maintain data integrity, these early records were separated into the Coastal (UCSB) subset. While the lack of empty images made it unsuitable for inclusion in the integrated time-series analysis, it was retained as a valuable baseline data set for historical comparison.
Excluded subsets
The following subsets were excluded from the integrated analysis due to one or more critical incompatibilities:
Young: This subset was the largest and most geographically extensive. It was excluded as a single analytical unit because it contained multiple, distinct monitoring objectives (e.g., community-level vs. coyote-specific) and extended far beyond the Preserve’s coastal focus. However, compatible locations within this subset were identified for potential integration on a case-by-case basis.
WRA: Excluded due to a lack of temporal overlap (2013–2014) and an incompatible preserve-wide randomized spatial design. Additionally, its images had not been manually validated.
Cellular: Excluded because its cellular transmission protocol had data throttling limitations, which could have restricted the number of images recorded and created a systematic bias in wildlife detections.
Networked (UCSB): Excluded due to an incompatible study design that mixed motion-triggered images with a high volume of time-lapse images. Its spatial focus was also further inland than the established coastal transect.
Rosie: This subset was incompatible due to its very short operational duration (approx. 2 weeks) and the complete absence of spatial coordinates for its 30 locations.
UCSC: While temporally compatible, this subset deployed cameras in pairs, providing insufficient spatial replication for robust analysis. Its empty image handling protocol was also unknown.
Detailed compatibility results
Study design
Compatible subsets required community-level monitoring of medium- to large-bodied mammals using unbaited, motion-triggered cameras with complete retention of empty images. We evaluated compatibility based on three criteria: target species, camera trigger method, and empty image handling protocols.
Subset comparison
Compatibility between each candidate subset and the priority subsets (Coastal, Networked) based on study design, spatial distribution, and temporal coverage. Color represents the outcome as compatible or incompatible, with “?” indicating missing metadata and “SOME” indicating partial agreement.
Target species alignment: Most subsets focused on community-level monitoring of medium- to large-bodied mammals. However, a portion of the Young subset specifically targeted coyote monitoring. While other species were detected at these locations, the distinct research objective raised compatibility concerns. Given the Young subset’s substantial size, we flagged these coyote-focused locations for individual review rather than excluding the entire subset.
Camera trigger methods: Most subsets used motion-sensor detection exclusively. The Networked (UCSB) subset combined motion-triggered images with hourly time-lapse captures, creating substantially higher image volumes that complicated processing and validation workflows.
Empty image handling: The protocols for the Coastal subset changed over time. Early data (through May 2022) had empty images deleted, while later data retained all empty images. This shift reflected the transition from UCSB to TNC management in May 2022. We addressed this incompatibility by creating a new Coastal (UCSB) subset for the early data and redefining Coastal to include only post-May 2022 data under TNC management. Protocols for empty image handling remained unknown for UCSC and Rosie subsets. We applied a conservative exclusion approach for these subsets due to this uncertainty.
The Cellular subset was excluded due to data throttling limitations that potentially restricted image collection, creating systematic detection bias.
Spatial distribution
Spatial compatibility required camera placement within the Preserve’s coastal transect area and sufficient location numbers for robust statistical analysis.
Camera locations by subset
Wildlife camera locations by subset in the Dangermond Preserve and Hollister Ranch Animl projects. Spatial coordinates were unavailable for 47 locations (30 Rosie, 16 Young, and 1 WRA; not shown).
Geographic focus: Most subsets concentrated cameras along the Pacific Coast. The Networked (UCSB) subset placed cameras further inland, falling outside the coastal transect design. The Rosie subset could not be spatially evaluated because coordinates were unavailable for all 30 locations.
Sampling design compatibility: The WRA subset employed preserve-wide randomized sampling, which conflicted with the coastal transect approach of priority subsets. While valuable for preserve-wide monitoring, this design did not align with coastal-focused research objectives.
Spatial replication: Camera deployment density varied substantially. Three subsets (Cellular, Networked (UCSB), UCSC) deployed cameras in pairs, providing insufficient spatial replication for robust analysis. Remaining multi-location subsets averaged 32 locations (range: 12-65), with Young containing the most locations (n=65).
Boundary considerations: Several Young locations extended beyond Preserve boundaries: 4 locations at Jalama Beach, 11 locations spanning 12 km northwest into Vandenberg Space Force Base, and 9 coastal locations spanning 11 km southeast into Hollister Ranch.
Temporal coverage
Temporal compatibility required operational overlap during the 2022-present period, as this was the time frame when the priority subsets were active and providing data for comparative analysis.
Most subsets collected images after 2021. The WRA subset operated during 2013-2014, providing no temporal overlap with other subsets. The Rosie subset operated for approximately 2 weeks—insufficient duration for meaningful ecological inference and incompatible with the long-term monitoring approach of other subsets.
Image collection intervals
Range of image collection dates by subset and location. Lines represent the interval between the first and last image dates at each location; colors represent subsets in the Dangermond Preserve and Hollister Ranch Animl projects.
The Young subset spanned the longest period (2017-2025), likely encompassing multiple distinct studies with varying methodologies. Proper integration would require subdividing this subset by deployment phases, which exceeded current project scope.
| Subset name | Locations | First image | Last image | Total (days) | Mean (days) | Min. (days) | Max. (days) |
|---|---|---|---|---|---|---|---|
| Coastal | 12 | 2021-08-27 | 2025-05-20 | 1,363 | 1,024 | 19 | 1,362 |
| Networked | 14 | 2022-10-06 | 2025-07-17 | 1,016 | 670 | 84 | 1,016 |
| Networked (UCSB) | 2 | 2024-05-18 | 2025-07-18 | 427 | 376 | 325 | 427 |
| Young | 58 | 2017-01-02 | 2025-07-17 | 3,119 | 692 | 1 | 2,680 |
| WRA | 38 | 2013-10-22 | 2015-09-25 | 704 | 196 | 6 | 339 |
| UCSC | 2 | 2023-06-20 | 2024-03-04 | 259 | 230 | 202 | 259 |
| Cellular | 2 | 2022-10-06 | 2023-03-16 | 162 | 126 | 90 | 162 |
| Rosie | 30 | 2024-10-05 | 2024-12-20 | 77 | 27 | 11 | 36 |
| Total | 158 | — | — | 7,127 | 3,341 | 738 | 6,281 |
| Mean | 20 | — | — | 891 | 418 | 92 | 785 |
Subset dates
Location count and image date range by subset, ordered by inclusion priority. Location-level duration statistics show total, mean, minimum, and maximum days.
Image data quality
Image data quality was evaluated based on validation completeness—the proportion of images with human-verified species identifications. Compatible subsets required validation of all wildlife-containing images and at least 10% of empty images by trained reviewers.
We identified over 1.8 million images across all subsets. Image counts varied substantially; three subsets exceeded 250,000 images each: Young (977k), WRA (391k), and Networked (280k). The Cellular subset had the fewest images (8,175).
| Subset name | Locations | Total images | Mean | Min. | Max. |
|---|---|---|---|---|---|
| Coastal | 12 | 50,205 | 4,184 | 2 | 15,798 |
| Networked | 14 | 280,430 | 20,031 | 701 | 79,293 |
| Networked (UCSB) | 2 | 21,073 | 10,536 | 9,940 | 11,133 |
| Young | 58 | 977,531 | 16,854 | 7 | 144,243 |
| WRA | 38 | 391,886 | 10,313 | 190 | 44,516 |
| UCSC | 2 | 20,721 | 10,360 | 9,141 | 11,580 |
| Cellular | 2 | 8,175 | 4,088 | 900 | 7,275 |
| Rosie | 30 | 83,141 | 2,771 | 152 | 39,203 |
| Total | 158 | 1,833,162 | 79,137 | 21,033 | 353,041 |
| Mean | 20 | 229,145 | 9,892 | 2,629 | 44,130 |
Total raw image count by subset
Image count across the entire dataset before quality control filtering, ordered by inclusion priority. Location-level statistics for mean, minimum and maximum counts.
The WRA subset was processed solely by machine learning without human validation, making it incompatible with validated data sets. Several other subsets had incomplete validation requiring further evaluation before integration.
Image validation status
Image counts by validation status and subset, ordered by total image count. Shading indicates validated and unvalidated.
3.4 Coverage results
Based on the data inventory and the outcomes of the compatibility assessment, data filtering, and validation, we identified 24 locations (10 Coastal, 14 Networked) with adequate uptime, and prioritized Fall and Winter seasons between September 2022 and February 2025. This resulted in 6 occasions (3 Fall, 3 Winter) for subsequent analysis. The Coastal (UCSB) subset was excluded due to insufficient uptime consistency.
Coastal and Networked locations
Locations evaluated for minimum sampling effort and image validation completeness.
Location uptime
Data collection began on August 26, 2021, providing partial coverage for the final months of 2021 due to mid-year camera installation. We included data collected through May 31, 2025, spanning 15 occasions from Fall 2021 through Spring 2025.
The initial Coastal (UCSB) installation resulted in only 6 days of data collection during Summer 2021. Because this brief period would fail to meet our 14-day inclusion criterion, we appended the August 26-31 data to the Fall 2021 occasion, given their proximity to the season boundary.
Location timeline
Image collection periods for Coastal and Networked locations, ordered by decreasing operational duration. Lines show intervals between first and last image dates; white circles indicate 3-month occasions with >14 days uptime.
Uptime by occasion
Camera trap-days by 3-month occasion for Coastal and Networked locations, ordered by total uptime. Dashed line indicates transition from Coastal (UCSB) to Coastal management in May 2022. Color intensity represents active trap-days (white: 0-14 days, viridis gradient: >14 days).
Uptime by month
Monthly camera trap-days for Coastal and Networked locations, ordered by total uptime. Dashed line indicates management transition in May 2022. Color intensity represents active trap-days (white: 0-14 days, viridis gradient: >14 days).
Coastal (UCSB) subset performance: The 12 Coastal (UCSB) locations showed uptime gaps over the three operational occasions, averaging 10 active cameras per occasion. Notable issues included complete inactivity at 5 locations during November 2021 and May 2022, and permanent discontinuation of two locations (Coastal 6: active only September 2021; Coastal 1: active only October 2021-January 2022).
Coastal subset performance: Although 12 locations were originally installed, only 10 were retained when data collection transitioned to TNC management, excluding the discontinued Coastal 1 and Coastal 6 sites. Activity showed strong fluctuations: only 5 locations were active during Summer 2022, activity increased through Fall-Winter 2022 but then declined sharply in Spring 2023. For the following year (April 2023–March 2024), monitoring was reduced to a single active location (Coastal 8). This location subsequently became inactive, resulting in a period of complete inactivity for the subset from April to August 2024.
Networked subset performance: Data collection began in October 2022 with 4 locations; three locations (Governments West, North Beach, Percos Beach) maintained consistent activity throughout the project duration. The fourth location (Percos Tracks) operated briefly (October-November 2022), remained inactive for nearly two years, then resumed service in August 2024. Eleven additional locations joined the Networked subset in late 2023 (September: 1, October: 8, November: 1), though two operated briefly before discontinuation (Little Cojo: October-December 2023; Damsite: August 2023-February 2024).
Season selection
Location activity varied by season. Among the 24 total locations, simultaneous activity ranged from 4 locations (Summer 2023) to 19 locations (Winter 2024, Spring 2025). Fall and Winter seasons consistently maintained the highest location counts: Fall averaged 15 active locations (range: 12-18), and Winter averaged 14 (range: 10-19). Summer had the lowest average (9 locations), primarily due to poor performance in 2022 and 2023 (5 and 4 locations, respectively).
| Year | Fall | Winter | Spring | Summer |
|---|---|---|---|---|
| 2022 | 12 | 10 | — | 5 |
| 2023 | 14 | 14 | 6 | 4 |
| 2024 | 18 | 19 | 12 | 18 |
| 2025 | — | — | 19 | — |
| Mean | 15 | 14 | 12 | 9 |
| Minimum | 12 | 10 | 6 | 4 |
Active locations by season
Count of locations with >14 days uptime by season and year for Coastal and Networked subsets combined.
The number of images collected varied by season. Fall and Winter occasions required the least image validation due to lower empty image volumes. Spring and Summer occasions generated substantially more empty images, with Summer requiring the most validation effort.
Wireless cameras collected far more images than traditional cameras. Before beginning human validation, we filtered these subsets to exclude images affected by camera malfunctions (e.g., shifted views, incorrect settings), resulting in a smaller dataset. The resulting Networked subset contained the most images (181,828 total) but had the lowest validation completion rates: 45% of object images and 5% of empty images had been reviewed. We estimated that approximately 28,338 images (20,156 object images, 8,182 empty images) required validation across the three subsets.
| Subset name | Images | Object (total) | Object (% validated) | Object (to validate) | Empty (total) | Empty (% validated) | Empty (to validate) |
|---|---|---|---|---|---|---|---|
| Coastal | 20,529 | 7,357 | 56% | 3,276 | 13,172 | 18% | 0 |
| Coastal (UCSB) | 6,776 | 2,293 | 53% | 1,084 | 4,483 | 5% | 236 |
| Networked | 181,828 | 28,736 | 45% | 15,796 | 153,125 | 5% | 7,946 |
| Total | 209,133 | 38,386 | — | 20,156 | 170,780 | — | 8,182 |
Pre-validation image inventory
Initial counts of object and empty images for the Coastal, Coastal (UCSB), and Networked subsets after filtering for camera malfunctions.
Image validation
Manual review of 57,144 images was completed between April and July 2025, including 19,763 object images and 37,381 empty images.
Object images were fully validated for the Coastal and Coastal (UCSB) subsets. Most object images in the Networked subset were also validated, with the remaining unreviewed images concentrated in Summer occasions, which had previously been designated as the lowest priority for analysis.
Validation status before and after review
Distribution of validated and unvalidated images by subset before (above) and after (below) systematic validation effort. Color represents image type for object and empty images; shading indicates status as validated (dark) and unvalidated (light). Note: Unequal bar length before and after validation due to ongoing image collection.
3.5 Outcomes and recommendations
Scoping outcomes
The scoping phase was a foundational step that transformed a large, heterogeneous collection of camera data into a standardized, analysis-ready data set. The process systematically defined the final scope of the biodiversity and performance analyses presented in this report. The key outcomes are summarized below.
Data inventory and standardization: The initial inventory consolidated all available data from disparate sources into a single, standardized, location-centric database. A primary achievement was restructuring the existing data to align with the Wildlife Insights framework, which involved a significant manual effort to define unique camera locations, resolve hardware ambiguities, and retroactively establish functional deployment periods based on a thorough review of image histories and service logs. This created the robust data foundation necessary for all subsequent assessments.
Identification of compatible data subsets: The compatibility assessment evaluated eight distinct data subsets against four criteria: study design, spatial distribution, temporal coverage, and data quality. This process confirmed that the Coastal and Networked subsets were suitable for an integrated analysis. It also resulted in the creation of a new Coastal (UCSB) subset to isolate historical data with a different methodology (deleted empty images), thereby preserving the integrity of the primary time-series analysis while retaining the data for baseline comparisons. All other subsets were excluded due to critical incompatibilities.
Definition of the final analytical scope: The coverage assessment finalized the data set by identifying periods with sufficient camera uptime and complete image validation. This assessment determined that only the Fall and Winter seasons between September 2022 and February 2025 met the necessary quality thresholds for robust analysis. Due to insufficient camera uptime and the large, unvalidated volume of images from the Spring and Summer, data from these seasons were excluded from the final analysis. This outcome defined the final data set used for all subsequent performance and biodiversity metrics, which included 24 locations (10 Coastal, 14 Networked) across 6 temporal occasions.
Analytical next steps
The foundational work completed in this project has produced a robust data set that serves as the basis for the analyses within this report and provides a clear path for future work.
Proceed with integrated analysis (complete). This report used the validated data set to conduct the recommended performance and biodiversity analysis (e.g., camera activity, species richness, occupancy). This provides the first multi-year insights into the coastal mammal community based on a consistent data subset.
Prioritize Spring and Summer validation. The most critical next step is to complete the manual image validation for the Spring and Summer seasons for the 24 validated locations, prioritizing the interval after September 2022 to align with the initial validated data. Finishing this task would enable a full four-season comparison, offering a more complete picture of annual wildlife activity patterns and seasonal habitat use.
Recommendations for future monitoring
To streamline future analyses and increase the scientific value of all data collected, we recommend strategic improvements to the data collection workflow and tactical adjustments at specific camera locations.
Workflow and protocol enhancements
The following enhancements are critical for ensuring that all future data are analysis-ready from the point of collection.
Establish and document functional dates for all deployments. The single most important improvement is to systematically record the exact dates a camera is operating as intended.
Action: During every site visit, technicians should log the camera’s status. If a camera’s view is obstructed or it is malfunctioning, the start and end dates of this non-functional period must be recorded.
Why it matters: Without accurate functional dates, it’s impossible to distinguish a true absence of wildlife from a camera failure. This uncertainty undermines the reliability of nearly all ecological analyses.
Standardize site visit protocols. To create an unambiguous link between field activities and the image data, we recommend a consistent protocol for every camera check.
Action: At every visit, take a “start-of-survey” photo of a whiteboard showing the location name, date, and time. Use a standardized tag (e.g., “site visit”) for these images in the database.
Why it matters: This creates a permanent, easily searchable record of the entire deployment history directly within the image data.
Integrate project metadata within Animl. Assessing the compatibility of different camera projects was challenging without a centralized source of high-level information.
Action: Document the study design, objectives, and camera placement strategy for each distinct project directly within the Animl platform or in a linked data management plan.
Why it matters: This practice will make it significantly easier to determine which data sets can be integrated for future landscape-scale analyses.
Implement targeted QC for nighttime images. The validation process revealed that the machine learning model occasionally misclassifies animals in nighttime infrared images as “empty.”
Action: Implement a routine quality control step to manually review a subsample of nighttime images classified as empty, especially at locations with known lower model performance (e.g., Percos Beach, North Vista Springs Bluff).
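One way to operationalize this QC step is to draw a reproducible random subsample of the "empty"-classified images for manual review. The sketch below is illustrative only: the function name, the 5% fraction, and the fixed seed are assumptions, not values from the monitoring protocol.

```python
import random

def qc_subsample(empty_image_ids, fraction=0.05, seed=42):
    """Draw a reproducible random subsample of images classified as
    "empty" for manual review. The fraction and seed are illustrative
    choices, not part of the actual protocol."""
    rng = random.Random(seed)
    k = max(1, round(len(empty_image_ids) * fraction))
    return rng.sample(list(empty_image_ids), k)

# Hypothetical batch of 200 empty-classified nighttime images.
ids = [f"img_{i:04d}" for i in range(200)]
sample = qc_subsample(ids)
print(len(sample))  # 10
```

Fixing the seed makes the review set reproducible, so a second reviewer can audit exactly the same images.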
Site-specific adjustments
The performance analysis identified several locations where the following adjustments could significantly improve data quality.
Coastal 2: Reposition the camera to focus more on the terrestrial habitat and less on the ocean and sky.
Percos Culvert: Lower the camera to knee height and aim it through the culvert to better target terrestrial mammals.
North Vista Springs: Move the camera closer to the target trail or feature of interest.
North Vista Springs Bluff: Adjust the camera angle to minimize the amount of ocean in the view to reduce false triggers from wave movement.
Camera opposite Governments Beach: Replace the white flash unit with a standard infrared flash camera to ensure observations reflect natural animal behavior.
4 Methods
The analysis was restricted to images collected at 24 cameras deployed specifically for coastal monitoring. This data set was selected during the initial scoping assessment based on its methodological consistency and spatial relevance to the project’s objectives. The data used for this performance assessment were collected between September 2022 and May 2025, but were further filtered to include only the Fall and Winter seasons from September 2022 through February 2025, as these were the periods with complete and reliable image validation.
4.1 Camera performance
Camera performance was assessed to quantify sampling intensity and operational efficiency across the camera network. The analysis focused on camera activity, which was summarized across four temporal scales: overall, annually, seasonally, and by 3-month occasion. This allowed for the identification of performance trends and potential biases in the data set.
A daily effort history was created for each camera location, with every 24-hour period assigned one of four statuses: active, excluded, inactive, or absent (see the Camera data section). A day was considered active if the camera was confirmed to be functional and collecting data. The total number of active days was defined as a camera’s uptime. To allow for direct comparison between locations and time periods with different deployment lengths, we calculated uptime as a percentage of the total days in each summary period. This standardized metric was used to identify and diagnose underperforming camera locations.
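The uptime calculation described above can be sketched in a few lines of Python. This is a minimal illustration: the function name and the toy effort history are assumptions for demonstration, not the project's actual code.

```python
from collections import Counter
from datetime import date, timedelta

def uptime_percent(daily_status):
    """Uptime: active days as a percentage of all days in the summary
    period. `daily_status` maps each date in the period to one of the
    four effort statuses: "active", "excluded", "inactive", "absent"."""
    counts = Counter(daily_status.values())
    return 100 * counts["active"] / len(daily_status)

# Toy 10-day history: 7 active days, 2 inactive, 1 excluded.
start = date(2022, 9, 1)
history = {start + timedelta(days=i): "active" for i in range(7)}
history[start + timedelta(days=7)] = "inactive"
history[start + timedelta(days=8)] = "inactive"
history[start + timedelta(days=9)] = "excluded"
print(uptime_percent(history))  # 70.0
```

Dividing by the total days in the summary period (rather than by installed days) is what makes uptime comparable across deployments of different lengths.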
4.2 Biodiversity metrics
We calculated two primary biodiversity metrics: species richness and naïve occupancy. These analyses were based on a filtered subset of the image data to ensure comparability across time and space.
Data preparation for biodiversity analysis
To create the analytical data set, the complete image database was filtered to meet specific criteria. The data set included only:
- Records of terrestrial, wild mammal species.
- Images from the Coastal and Networked location subsets.
- Data collected during the Fall (September–November) and Winter (December–February) seasons.
- Observations recorded between September 1, 2022, and February 28, 2025.
Images with confirmed date or camera view errors were excluded from the final data set.
Detection history preparation
A species-level detection history was generated as the foundational data for all biodiversity metrics. This process converted the sequence of image records into a standardized format representing the presence or absence of each species at each location over time.
First, we identified independent detection events for each species at each location. Successive images of the same species were grouped into a single detection event if they occurred within a predefined 30-minute interval of one another. Each independent detection event was then coded as a detection (1). Given the 30-minute independence interval, a location could record between 0 and 48 detections per day.
Next, these events were used to determine the detection status for larger sampling periods. For a given sampling period (e.g., a season), a species was recorded as detected (1) at a location if at least one independent detection event occurred. If no events were recorded, the species was considered not detected (0) for that location and period.
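The two steps above — grouping images into independent events, then collapsing events into a per-period detection status — can be sketched as follows. The function name and toy timestamps are illustrative assumptions.

```python
from datetime import datetime, timedelta

INDEPENDENCE = timedelta(minutes=30)

def independent_events(timestamps, gap=INDEPENDENCE):
    """Collapse image timestamps for one species at one location into
    independent detection events: a new event starts whenever more than
    `gap` has elapsed since the previous image of that species."""
    events = []
    last = None
    for t in sorted(timestamps):
        if last is None or t - last > gap:
            events.append(t)  # first image of a new event
        last = t
    return events

# Four images: three within 30 minutes of each other, then one 45
# minutes later, giving two independent events.
base = datetime(2024, 11, 1, 6, 0)
images = [base + timedelta(minutes=m) for m in (0, 10, 25, 70)]
events = independent_events(images)

# Step two: the species is "detected" (1) for the sampling period if
# at least one independent event occurred.
season_status = int(len(events) > 0)
print(len(events), season_status)  # 2 1
```

Note that independence is chained: each image is compared to the previous image, so a long burst of activity can form one event even if its first and last images are more than 30 minutes apart.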
Species richness
Species richness was defined as the total count of unique mammal species detected (e.g., Tobler et al. 2008; Swanson et al. 2015). This metric provided a direct measure of the variety of species observed across the study area and over time. Richness was calculated for the combined camera network and for individual camera subsets, and was summarized by location and across the four temporal scales (overall, year, season, and occasion).
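Computed from the detection records, richness reduces to a unique count. This sketch uses hypothetical records; the function name is an assumption.

```python
def species_richness(records):
    """Richness: count of unique species with at least one detection.
    `records` is an iterable of (location, species) detection records
    for one summary period (e.g., a season or the full study)."""
    return len({species for _, species in records})

# Hypothetical detections pooled across two locations in one season.
season_records = [
    ("Coastal 3", "coyote"), ("Coastal 3", "deer"),
    ("Percos Beach", "coyote"), ("Percos Beach", "bobcat"),
]
print(species_richness(season_records))  # 3
```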
Addressing sampling effort
Because species richness can be positively correlated with sampling effort, we implemented several steps to ensure that comparisons were valid and not artifacts of varying camera uptime. During the initial scoping phase, we reviewed camera operational dates to identify seasons with a sufficient number of locations that were consistently active for more than 14 days per month. The data were then filtered to include only these reliable locations and time periods. Finally, we calculated camera uptime using the same spatial and temporal structure as the richness analysis, which enabled a direct comparison between observed trends in richness and sampling effort.
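The ">14 active days per month" screen described above can be expressed as a simple filter. The data values and function name below are hypothetical; only the 14-day threshold comes from the text.

```python
def reliable_locations(active_days_by_month, min_days=14):
    """Keep locations that were active more than `min_days` in every
    month of the season, mirroring the screening rule used during
    scoping. `active_days_by_month` maps each location to its list of
    active-day counts, one entry per month in the season."""
    return [loc for loc, months in active_days_by_month.items()
            if all(days > min_days for days in months)]

# One location passes all three season months; the other fails its
# second month (5 active days) and is dropped.
effort = {"Coastal 8": [30, 28, 31], "Coastal 2": [30, 5, 0]}
print(reliable_locations(effort))  # ['Coastal 8']
```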
Species occupancy
Occupancy is a measure of where animals spend time (e.g., Rovero et al. 2014). We estimated naïve occupancy to understand species distribution patterns. Naïve occupancy was defined as the proportion of active camera locations where a species was detected during a given sampling period (Wintle et al. 2004; MacKenzie and Royle 2005).
The primary sampling period was defined as a 3-month season (Fall, Winter). This duration was selected to align with wildlife community dynamics and to satisfy the “closure” assumption, which requires that a species’ presence or absence at a location remains constant throughout the sampling interval. Occupancy was estimated for each species and time interval by dividing the number of locations with at least one detection by the total number of active locations.
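The naïve occupancy estimator is the ratio just described; a minimal sketch, with hypothetical location names:

```python
def naive_occupancy(detections_by_location):
    """Naive occupancy: proportion of active locations with at least
    one independent detection during the sampling period.
    `detections_by_location` maps each active location to its 0/1
    detection status for one species in one season."""
    return sum(detections_by_location.values()) / len(detections_by_location)

# One species detected at 3 of 4 active locations in a season.
status = {"Coastal 3": 1, "Coastal 5": 1, "North Beach": 0, "Damsite": 1}
print(naive_occupancy(status))  # 0.75
```

Only locations that were active during the season belong in the denominator; including inactive locations would deflate the estimate.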
Assumptions and limitations
This approach assumes perfect detection, meaning a species is always detected if it is present. This can lead to false absences—instances where a species uses a location but is not detected—and may result in an underestimation of true occupancy. The resulting metric is an estimate of occupancy for the entire collection of surveyed locations rather than for individual sites. Despite these limitations, monitoring naïve occupancy can be an effective method for detecting changes in species distribution over time (Ewing, Doll, and Ewing 2024). An alternative, formal occupancy modeling, can account for imperfect detection but was beyond the scope of this project.
5 Results
Summary
Camera performance
- Locations: 24
- First survey date: Sep 01, 2022
- Last survey date: May 31, 2025
- Seasons: Fall and Winter
- Sampling days: 538

Images
- Total images: 88,647
- Wildlife images: 13,128
- Images per location (average): 3,693

Identified species
- Number of species: 9

Independent detections
- Most common species: Coyote, Deer, Wild pig, Bobcat, Raccoon
- Rare species (<10 detections): Puma, Gray fox
5.1 Camera performance
Monitoring performance was evaluated for each location in the Coastal and Networked subsets using the validated data from the Fall and Winter seasons between September 2022 and February 2025. The assessment included summaries of individual location performance as well as trends across annual, seasonal, and 3-month occasion time scales.
Although the total study period spanned September 2022 to February 2025, the analysis included only Fall (September-November) and Winter (December-February) seasons, for a total of 538 days. The annual metric combined these six calendar months. Because Winter seasons cross calendar years, the number of months per year varied: 2022 included four months (September-December) while 2025 included only two (January-February). These differences in sampling effort should be considered when interpreting annual estimates, particularly for 2025.
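The occasion labeling implied above — a Winter occasion spans two calendar years but is named for the year it begins — can be made explicit with a small helper. This convention is inferred from the occasion tables (e.g., "Winter 2024" covers December 2024 through February 2025); the function name is an assumption.

```python
from datetime import date

def fall_winter_occasion(d):
    """Assign a date to its Fall or Winter occasion, labeling Winter by
    the year it begins (so February 2025 falls in "Winter 2024").
    Returns None for Spring/Summer months, which were excluded."""
    if d.month in (9, 10, 11):
        return ("Fall", d.year)
    if d.month == 12:
        return ("Winter", d.year)
    if d.month in (1, 2):
        return ("Winter", d.year - 1)
    return None

print(fall_winter_occasion(date(2025, 2, 15)))  # ('Winter', 2024)
print(fall_winter_occasion(date(2022, 10, 1)))  # ('Fall', 2022)
```

The annual summaries, by contrast, group by plain calendar year, which is why a Winter occasion contributes months to two different annual estimates.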
Camera activity by location
Uptime performance varied spatially, with consistently lower performance at southeastern Preserve locations: Little Cojo, Damsite, Coastal 2, and Coastal 12. Some adjacent location pairs, like those at Black Canyon or North Vista Spring Canyon, showed comparable uptime. However, proximity did not always correspond with similar performance; North Beach achieved 94% uptime while nearby Coastal 7 managed only 44%.
Location uptime performance
Percentage of active monitoring time by location during Fall and Winter seasons, September 2022-February 2025. Circle size and color intensity increase with uptime percentage; bold outline indicates highest-performing location. Light gray polygons show coastal transect cells.
The 24 cameras were deployed for a total of 9,709 days during the analysis period, achieving an average installation coverage of 74%. The Coastal locations, which were established earlier, had higher installation coverage (86%) than the newer Networked locations (66%). The installation coverage metrics were affected by the addition of ten Networked locations in Fall 2023 and the early discontinuation of three other locations (Coastal 9, Damsite, and Little Cojo).
Despite having lower installation coverage, the Networked locations achieved a higher average uptime (61%) compared to the Coastal locations (47%). Performance across all sites ranged from a high of 94% to a low of 15%. Most Coastal locations performed below 45% uptime, while the majority of Networked locations exceeded this threshold.
Inactivity was a primary factor in performance differences, particularly for the Coastal locations, which were inactive for an average of 33% of their deployed days. In contrast, only three Networked locations had inactive periods: Percos Tracks (50%), Governments Beach (12%), and North Vista Spring Canyon (1%).
Data exclusion rates were low for both subsets (Coastal: 5%, Networked: 1%). Two cameras in each subset were responsible for most of the exclusions: Coastal 7 (50% excluded) and Coastal 12 (3% excluded); Black Canyon (12% excluded) and North Vista Springs Canyon (1% excluded).
| Location | Installed (%) | Installed (days) | Active (%) | Active (days) | Excluded (%) | Excluded (days) | Inactive (%) | Inactive (days) |
|---|---|---|---|---|---|---|---|---|
| Coastal | | | | | | | | |
| Coastal 8 | 100% | 544 | 100% | 544 | 0% | 0 | 0% | 0 |
| Coastal 3 | 100% | 544 | 67% | 362 | 0% | 0 | 33% | 182 |
| Coastal 5 | 100% | 544 | 67% | 362 | 0% | 0 | 33% | 182 |
| Coastal 12 | 100% | 544 | 45% | 246 | 3% | 15 | 52% | 283 |
| Coastal 7 | 100% | 544 | 44% | 242 | 50% | 271 | 6% | 31 |
| Coastal 2 | 100% | 544 | 33% | 181 | 0% | 0 | 67% | 363 |
| Coastal 9 | 33% | 181 | 33% | 181 | 0% | 0 | 0% | 0 |
| Coastal 4 | 100% | 544 | 30% | 161 | 0% | 0 | 70% | 383 |
| Coastal 10 | 100% | 544 | 28% | 153 | 0% | 0 | 72% | 391 |
| Coastal 11 | 24% | 130 | 24% | 130 | 0% | 0 | 0% | 0 |
| Total | — | 4,663 | — | 2,562 | — | 286 | — | 1,815 |
| Mean | 86% | 466 | 47% | 256 | 5% | 29 | 33% | 182 |
| Networked | | | | | | | | |
| Governments West | 94% | 509 | 94% | 509 | 0% | 0 | 0% | 0 |
| North Beach | 94% | 509 | 94% | 509 | 0% | 0 | 0% | 0 |
| Percos Beach | 94% | 509 | 94% | 509 | 0% | 0 | 0% | 0 |
| Black Canyon Bluff | 67% | 363 | 67% | 363 | 0% | 0 | 0% | 0 |
| East Percos | 67% | 363 | 67% | 363 | 0% | 0 | 0% | 0 |
| N Vista Spring Bluff | 67% | 363 | 67% | 363 | 0% | 0 | 0% | 0 |
| Percos Culvert | 67% | 363 | 67% | 363 | 0% | 0 | 0% | 0 |
| N Vista Spring Canyon | 67% | 363 | 65% | 355 | 1% | 4 | 1% | 4 |
| Black Canyon | 67% | 363 | 55% | 299 | 12% | 64 | 0% | 0 |
| Governments Beach | 67% | 363 | 54% | 296 | 0% | 0 | 12% | 67 |
| Percos Tracks | 94% | 509 | 43% | 235 | 0% | 0 | 50% | 274 |
| Point C North | 41% | 224 | 41% | 224 | 0% | 0 | 0% | 0 |
| Damsite | 30% | 161 | 30% | 161 | 0% | 0 | 0% | 0 |
| Little Cojo | 15% | 84 | 15% | 84 | 0% | 0 | 0% | 0 |
| Total | — | 5,046 | — | 4,633 | — | 68 | — | 345 |
| Mean | 66% | 360 | 61% | 331 | 1% | 5 | 5% | 25 |
| Total | — | 9,709 | — | 7,195 | — | 354 | — | 2,160 |
| Mean | 74% | 405 | 55% | 300 | 3% | 15 | 17% | 90 |
Performance summary by location
Installation duration and daily activity status at each location, ordered by overall uptime. Percentages calculated across all Fall and Winter seasons (September 2022 - February 2025).
Camera activity by year
Across all 24 locations, the average annual uptime was 55%. An overall trend of improvement was observed, with average uptime increasing from 40% in 2022 to 77% in 2025.
Annual uptime trends
Average uptime by year for combined subsets and individually. Large circles show overall means; small open circles represent individual location means. Years reflect aggregated data from Fall and Winter seasons only.
The two subsets showed different annual patterns. The Networked locations exhibited steady improvement, starting at 26% uptime in 2022 before stabilizing at a high level of performance (65% in 2023, then 79% in 2024 and 2025). In contrast, the Coastal locations showed a V-shaped trend, with high uptime in 2022, a decline to 24% in 2023, and a subsequent recovery in 2025. This decline was driven by a substantial increase in inactive days during 2023.
| Year | Installed (%) | Installed (days) | Locations | Active (%) | Active (days) | Excluded (%) | Excluded (days) | Inactive (%) | Inactive (days) |
|---|---|---|---|---|---|---|---|---|---|
| 2022 | 54% | 1,568 | 12 | 40% | 1,158 | 3% | 92 | 11% | 318 |
| 2023 | 79% | 3,421 | 20 | 48% | 2,091 | 3% | 142 | 27% | 1,188 |
| 2024 | 82% | 3,599 | 21 | 65% | 2,853 | 3% | 120 | 14% | 626 |
| 2025 | 79% | 1,121 | 19 | 77% | 1,093 | 0% | 0 | 2% | 28 |
| Total | — | 9,709 | 72 | — | 7,195 | — | 354 | — | 2,160 |
| Mean | 73% | 2,427 | 18 | 58% | 1,799 | 2% | 88 | 14% | 540 |
Annual activity overview
Days and percentage by activity status each year across all monitoring locations. Years reflect aggregated data from Fall and Winter seasons only.
| Year | Installed (%) | Installed (days) | Locations | Active (%) | Active (days) | Excluded (%) | Excluded (days) | Inactive (%) | Inactive (days) |
|---|---|---|---|---|---|---|---|---|---|
| Coastal | | | | | | | | | |
| 2022 | 100% | 1,220 | 8 | 69% | 843 | 8% | 92 | 23% | 285 |
| 2023 | 84% | 1,515 | 7 | 24% | 441 | 7% | 134 | 52% | 940 |
| 2024 | 80% | 1,456 | 8 | 46% | 834 | 3% | 60 | 31% | 562 |
| 2025 | 80% | 472 | 8 | 75% | 444 | 0% | 0 | 5% | 28 |
| Total | — | 4,663 | 31 | — | 2,562 | — | 286 | — | 1,815 |
| Mean | 86% | 1,166 | 8 | 54% | 640 | 5% | 72 | 28% | 454 |
| Networked | | | | | | | | | |
| 2022 | 29% | 348 | 4 | 26% | 315 | 0% | 0 | 3% | 33 |
| 2023 | 75% | 1,906 | 13 | 65% | 1,650 | 0% | 8 | 10% | 248 |
| 2024 | 84% | 2,143 | 13 | 79% | 2,019 | 2% | 60 | 3% | 64 |
| 2025 | 79% | 649 | 11 | 79% | 649 | 0% | 0 | 0% | 0 |
| Total | — | 5,046 | 41 | — | 4,633 | — | 68 | — | 345 |
| Mean | 67% | 1,262 | 10 | 62% | 1,158 | 1% | 17 | 4% | 86 |
| Total | — | 9,709 | 72 | — | 7,195 | — | 354 | — | 2,160 |
| Mean | 76% | 1,214 | 9 | 58% | 899 | 3% | 44 | 16% | 270 |
Annual activity by subset
Days and percentage by activity status each year, separated by Coastal and Networked subsets. Years reflect aggregated data from Fall and Winter seasons only.
Coastal locations had higher exclusion and inactivity rates than Networked locations across the study period. Inactivity rates for Coastal locations (28%) were much higher than for Networked locations (4%). Both subsets experienced peak inactivity in 2023.
Annual activity comparison
Days each year when cameras were active, excluded, inactive, or absent across all monitoring locations. Years reflect aggregated data from Fall and Winter seasons only.
Camera activity by season
Overall uptime was nearly identical between Fall (56%) and Winter (55%) seasons. The Coastal locations performed consistently in both seasons (47% uptime each), while the Networked locations performed slightly better in the Fall (70%) than in the Winter (60%). This seasonal difference in the Networked subset aligned with its higher installation coverage in Fall (75%) versus Winter (67%). The Coastal subset had consistently high inactivity (33%) across both seasons.
Seasonal uptime trends
Average uptime for Fall and Winter seasons. Large circles show overall means; small open circles represent individual location means (2022-2025).
| Season | Installed (%) | Installed (days) | Locations | Active (%) | Active (days) | Excluded (%) | Excluded (days) | Inactive (%) | Inactive (days) |
|---|---|---|---|---|---|---|---|---|---|
| Fall | 74% | 4,872 | 24 | 56% | 3,640 | 2% | 152 | 16% | 1,080 |
| Winter | 74% | 4,837 | 24 | 55% | 3,555 | 3% | 202 | 17% | 1,080 |
| Total | — | 9,709 | 48 | — | 7,195 | — | 354 | — | 2,160 |
| Mean | 74% | 4,854 | 24 | 55% | 3,598 | 3% | 177 | 17% | 1,080 |
Seasonal activity overview
Days and percentage by activity status for Fall and Winter seasons across all locations (24 locations, 2022-2025)
| Season | Installed (%) | Installed (days) | Locations | Active (%) | Active (days) | Excluded (%) | Excluded (days) | Inactive (%) | Inactive (days) |
|---|---|---|---|---|---|---|---|---|---|
| Coastal | | | | | | | | | |
| Fall | 87% | 2,366 | 10 | 47% | 1,294 | 6% | 152 | 34% | 920 |
| Winter | 85% | 2,297 | 10 | 47% | 1,268 | 5% | 134 | 33% | 895 |
| Total | — | 4,663 | 20 | — | 2,562 | — | 286 | — | 1,815 |
| Mean | 86% | 2,332 | 10 | 47% | 1,281 | 5% | 143 | 33% | 908 |
| Networked | | | | | | | | | |
| Fall | 75% | 2,506 | 14 | 70% | 2,346 | 0% | 0 | 5% | 160 |
| Winter | 67% | 2,540 | 14 | 60% | 2,287 | 2% | 68 | 5% | 185 |
| Total | — | 5,046 | 28 | — | 4,633 | — | 68 | — | 345 |
| Mean | 71% | 2,523 | 14 | 65% | 2,316 | 1% | 34 | 5% | 172 |
| Total | — | 9,709 | 48 | — | 7,195 | — | 354 | — | 2,160 |
| Mean | 78% | 2,427 | 12 | 56% | 1,799 | 3% | 88 | 19% | 540 |
Seasonal activity by subset
Days and percentage by activity status for Fall and Winter seasons, separated by subset (2022-2025)
Camera activity by occasion
The analysis of 3-month occasions showed that maximum uptime was achieved during Winter 2024 (77%), while the lowest occurred in Winter 2022 (36%).
The occasion-level data illustrated the annual trends in greater detail. The Networked subset’s performance closely tracked its installation coverage. The Coastal subset experienced substantial inactivity during every occasion, with two occasions (Winter 2022 and Fall 2023) dropping below 30% uptime.
Uptime by occasion
Average uptime by 3-month occasion for combined and individual subsets for Fall and Winter seasons. Large circles show overall means; small open circles show individual location means; solid lines connect sequential occasions; dashed lines indicate a 6-month gap between occasions (2022-2025).
| Occasion | Season | Year | Installed (%) | Installed (days) | Locations | Active (%) | Active (days) | Excluded (%) | Excluded (days) | Inactive (%) | Inactive (days) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Fall | 2022 | 52% | 1,134 | 12 | 40% | 879 | 3% | 61 | 9% | 194 |
| 2 | Winter | 2022 | 56% | 1,209 | 10 | 36% | 775 | 3% | 74 | 17% | 360 |
| 3 | Fall | 2023 | 90% | 1,967 | 14 | 54% | 1,172 | 4% | 91 | 32% | 704 |
| 4 | Winter | 2023 | 88% | 1,918 | 14 | 51% | 1,118 | 6% | 128 | 31% | 672 |
| 5 | Fall | 2024 | 81% | 1,771 | 18 | 73% | 1,589 | 0% | 0 | 8% | 182 |
| 6 | Winter | 2024 | 79% | 1,710 | 19 | 77% | 1,662 | 0% | 0 | 2% | 48 |
| Total | | | — | 9,709 | 87 | — | 7,195 | — | 354 | — | 2,160 |
| Mean | | | 74% | 1,618 | 14 | 55% | 1,199 | 3% | 59 | 17% | 360 |
Occasion activity overview
Days and percentage by activity status for each 3-month occasion across all locations (24 cameras, 6 occasions, 2022-2025).
| Occasion | Season | Year | Installed (%) | Installed (days) | Locations | Active (%) | Active (days) | Excluded (%) | Excluded (days) | Inactive (%) | Inactive (days) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Coastal | | | | | | | | | | | |
| 1 | Fall | 2022 | 100% | 910 | 8 | 72% | 657 | 7% | 61 | 21% | 192 |
| 2 | Winter | 2022 | 94% | 849 | 7 | 56% | 505 | 8% | 74 | 30% | 270 |
| 3 | Fall | 2023 | 80% | 728 | 1 | 10% | 91 | 10% | 91 | 60% | 546 |
| 4 | Winter | 2023 | 80% | 728 | 1 | 10% | 91 | 7% | 60 | 63% | 577 |
| 5 | Fall | 2024 | 80% | 728 | 6 | 60% | 546 | 0% | 0 | 20% | 182 |
| 6 | Winter | 2024 | 80% | 720 | 8 | 75% | 672 | 0% | 0 | 5% | 48 |
| Total | | | — | 4,663 | 31 | — | 2,562 | — | 286 | — | 1,815 |
| Mean | | | 86% | 777 | 5 | 47% | 427 | 5% | 48 | 33% | 302 |
| Networked | | | | | | | | | | | |
| 1 | Fall | 2022 | 29% | 224 | 4 | 28% | 222 | 0% | 0 | 0% | 2 |
| 2 | Winter | 2022 | 29% | 360 | 3 | 21% | 270 | 0% | 0 | 7% | 90 |
| 3 | Fall | 2023 | 97% | 1,239 | 13 | 85% | 1,081 | 0% | 0 | 12% | 158 |
| 4 | Winter | 2023 | 93% | 1,190 | 13 | 81% | 1,027 | 5% | 68 | 7% | 95 |
| 5 | Fall | 2024 | 82% | 1,043 | 12 | 82% | 1,043 | 0% | 0 | 0% | 0 |
| 6 | Winter | 2024 | 79% | 990 | 11 | 79% | 990 | 0% | 0 | 0% | 0 |
| Total | | | — | 5,046 | 56 | — | 4,633 | — | 68 | — | 345 |
| Mean | | | 68% | 841 | 9 | 63% | 772 | 1% | 11 | 5% | 58 |
| Total | | | — | 9,709 | 87 | — | 7,195 | — | 354 | — | 2,160 |
| Mean | | | 77% | 809 | 7 | 55% | 600 | 3% | 30 | 19% | 180 |
Occasion activity by subset
Days and percentage by activity status for each 3-month occasion, separated by subset (6 occasions, 2022-2025).
5.2 Species richness
Species richness was evaluated for each camera location in the Coastal and Networked subsets using validated image data from the Fall and Winter seasons between September 2022 and February 2025. The analysis summarized richness at individual locations and examined temporal trends across annual, seasonal, and 3-month occasions.
Species summary
A total of nine wild mammal species were detected across all locations. The community was composed of three carnivores, four omnivores, and two herbivores (Wilman et al. 2014). Most locations detected coyote, deer, and wild pig.
| Diet | Animal | Binomial | Body size | Locations |
|---|---|---|---|---|
| Carnivore | Coyote | Canis latrans | M | 22 |
| | Bobcat | Lynx rufus | M | 14 |
| | Puma | Puma concolor | L | 5 |
| Omnivore | Wild pig | Sus scrofa | L | 21 |
| | Raccoon | Procyon lotor | M | 9 |
| | Striped skunk | Mephitis mephitis | M | 4 |
| | Gray fox | Urocyon cinereoargenteus | M | 1 |
| Herbivore | Deer | Odocoileus hemionus | L | 22 |
| | Squirrel | Sciurus griseus | S | 2 |
Wildlife species detected
Terrestrial mammal species by diet type and number of locations (out of 24) with at least one detection during Fall and Winter seasons, 2022-2025. Species are ordered by diet type and detection frequency.
Richness by location
Observed species richness ranged from one to six species per location. The mean per-occasion richness across all 24 locations was three species. On average, Coastal locations detected more species per occasion (mean = 3.5, n = 10) than Networked locations (mean = 2.7, n = 14).
Five locations recorded the maximum richness of six species: Percos Tracks, Percos Beach, Black Canyon, Coastal 3, and Coastal 5. Four of these high-richness sites were clustered along the southern boundary of the Preserve, while the Black Canyon site was located on the western edge.
The relationship between camera uptime and species richness was not consistent. For example, some locations with high uptime did not yield the highest richness; Coastal 8 had the greatest uptime but detected four species. Conversely, some locations with relatively low uptime detected moderate richness, such as Damsite (30% uptime, 3 species). In other cases, locations with different uptimes detected similar richness; North Beach (94% uptime) and nearby Coastal 7 (44% uptime) both detected four species. Some locations recorded low richness relative to their operational time; Point C North detected one species with 41% uptime, and Black Canyon Bluff detected one species with 67% uptime.
Map of richness by location
Total species richness by location during Fall and Winter seasons, September 2022-February 2025. Circle size and color intensity correspond to camera uptime percentage. The bold outline indicates a location with the greatest richness. Light gray polygons represent coastal transect cells.
| Subset | Location | Total richness¹ | Mean | Min. | Max. | N |
|---|---|---|---|---|---|---|
| Coastal | Coastal 3 | 6 | 4.5 | 4 | 5 | 4 |
| | Coastal 5 | 6 | 3.5 | 2 | 6 | 4 |
| | Coastal 4 | 5 | 4.5 | 4 | 5 | 2 |
| | Coastal 11 | 5 | 3.5 | 2 | 5 | 2 |
| | Coastal 9 | 4 | 4.0 | 4 | 4 | 2 |
| | Coastal 2 | 4 | 3.5 | 3 | 4 | 2 |
| | Coastal 12 | 4 | 3.3 | 3 | 4 | 3 |
| | Coastal 8 | 4 | 3.0 | 2 | 4 | 6 |
| | Coastal 7 | 4 | 2.8 | 2 | 4 | 4 |
| | Coastal 10 | 3 | 2.0 | 1 | 3 | 2 |
| | Mean | 4 | 3.5 | 3 | 4 | 3 |
| | Maximum | 6 | 4.5 | 4 | 6 | 6 |
| Networked | Percos Tracks | 6 | 4.7 | 4 | 6 | 3 |
| | Percos Beach | 6 | 3.7 | 3 | 5 | 6 |
| | Black Canyon | 6 | 3.2 | 2 | 4 | 4 |
| | East Percos | 5 | 4.0 | 3 | 5 | 4 |
| | North Beach | 5 | 3.0 | 2 | 4 | 6 |
| | Governments Beach | 5 | 2.8 | 1 | 5 | 4 |
| | Governments West | 5 | 2.3 | 1 | 4 | 6 |
| | N Vista Spring Canyon | 4 | 3.0 | 2 | 4 | 4 |
| | N Vista Spring Bluff | 4 | 2.8 | 2 | 4 | 4 |
| | Damsite | 3 | 3.0 | 3 | 3 | 2 |
| | Little Cojo | 2 | 2.0 | 2 | 2 | 2 |
| | Percos Culvert | 2 | 1.0 | 1 | 1 | 2 |
| | Black Canyon Bluff | 1 | 1.0 | 1 | 1 | 4 |
| | Point C North | 1 | 1.0 | 1 | 1 | 3 |
| | Mean | 4 | 2.7 | 2 | 4 | 4 |
| | Maximum | 6 | 4.7 | 4 | 6 | 6 |
| All | Mean | 4 | 3.0 | 2 | 4 | 4 |
| | Maximum | 6 | 4.7 | 4 | 6 | 6 |

¹ Species detected: Coyote, Bobcat, Puma, Wild pig, Raccoon, Striped skunk, Gray fox, Deer, Squirrel.
Richness by location
Overall species richness by location for Fall and Winter seasons, 2022-2025. Tiles indicate the detection of carnivore, omnivore, and herbivore species at each location. Summary statistics are provided for mean, minimum, and maximum richness observed across temporal occasions.
Richness by year
Total species richness across all 24 locations varied annually, with an increase in the latter half of the study period. Richness was six species in 2022, five in 2023, nine in 2024, and eight in the shortened 2025 sampling period.
The two camera subsets exhibited different annual trends. Richness at Networked locations increased steadily from four species in 2022 to eight species in 2024 and 2025. In contrast, richness at Coastal locations fluctuated, starting at six species in 2022, decreasing to four in 2023, and peaking at seven in 2024.
In 2024, all nine species were detected, though no single subset detected the full community. That year, the Networked subset detected eight species and the Coastal subset detected seven. The average number of species detected per location was consistently greater for the Coastal subset (mean = 3.8) than the Networked subset (mean = 3.1).
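The richness-by-year aggregation described above can be sketched as follows. The actual analysis was done in R; this Python version, with a hypothetical record layout, shows only the computation: total richness for a year is the number of distinct species detected in that year.

```python
from collections import defaultdict

def richness_by_year(detections):
    """Map year -> count of distinct species detected that year."""
    species = defaultdict(set)
    for rec in detections:
        species[rec["year"]].add(rec["species"])
    return {year: len(sp) for year, sp in sorted(species.items())}

detections = [  # toy records, not Preserve data
    {"year": 2022, "species": "Coyote"},
    {"year": 2022, "species": "Deer"},
    {"year": 2023, "species": "Coyote"},
]
print(richness_by_year(detections))  # {2022: 2, 2023: 1}
```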
Annual richness trends
Species richness by year for the combined camera network and for each subset. Large circles show the total number of species detected; small open circles represent the richness at individual locations. Years reflect aggregated data from Fall and Winter seasons only.
| Subset | Year | Total richness | Mean | Min. | Max. | N |
|---|---|---|---|---|---|---|
| All | 2022 | 6 | 3.5 | 2 | 5 | 12 |
| | 2023 | 5 | 2.8 | 1 | 4 | 19 |
| | 2024 | 9 | 3.3 | 1 | 6 | 21 |
| | 2025 | 8 | 3.1 | 1 | 5 | 19 |
| | Mean | 7 | 3.2 | 1 | 5 | 18 |
| | Maximum | 9 | 3.5 | 2 | 6 | 21 |

Mean, Min., and Max. summarize richness per location; N is the number of active locations.
Annual richness overview
Number of distinct wildlife species detected each year across all monitoring locations. Years reflect aggregated data from Fall and Winter seasons only.
| Subset | Year | Total richness | Mean | Min. | Max. | N |
|---|---|---|---|---|---|---|
| Coastal | 2022 | 6 | 3.8 | 2 | 5 | 8 |
| | 2023 | 4 | 2.6 | 1 | 4 | 7 |
| | 2024 | 7 | 3.5 | 2 | 6 | 8 |
| | 2025 | 6 | 3.5 | 2 | 4 | 8 |
| | Mean | 6 | 3.3 | 2 | 5 | 8 |
| | Maximum | 7 | 3.8 | 2 | 6 | 8 |
| Networked | 2022 | 4 | 3.0 | 2 | 4 | 4 |
| | 2023 | 5 | 2.9 | 1 | 4 | 12 |
| | 2024 | 8 | 3.1 | 1 | 5 | 13 |
| | 2025 | 8 | 2.8 | 1 | 5 | 11 |
| | Mean | 6 | 3.0 | 1 | 4 | 10 |
| | Maximum | 8 | 3.1 | 2 | 5 | 13 |
| All | Mean | 6 | 3.2 | 2 | 5 | 9 |
| All | Maximum | 8 | 3.8 | 2 | 6 | 13 |

Mean, Min., and Max. summarize richness per location; N is the number of active locations.
Annual richness by subset
Number of distinct wildlife species detected each year, separated by Coastal and Networked subsets. Years reflect aggregated data from Fall and Winter seasons only.
Richness by season
Overall richness was slightly greater in the Fall (9 species) than in the Winter (8 species). This pattern was also observed in the Coastal subset. The trend was reversed for the Networked subset, which detected more species in the Winter (8 species) than in the Fall (6 species), despite having lower average camera uptime in Winter (60%) compared to Fall (70%).
Seasonal richness comparison
Richness for Fall and Winter seasons between September 2022 and February 2025. Large circles show overall richness; small open circles represent individual location richness.
| Subset | Season | Total richness | Mean | Min. | Max. | N |
|---|---|---|---|---|---|---|
| All | Fall | 9 | 2.9 | 1 | 5 | 43 |
| | Winter | 8 | 3.1 | 1 | 6 | 42 |
| | Mean | 8 | 3.0 | 1 | 6 | 42 |
| | Maximum | 9 | 3.1 | 1 | 6 | 43 |

Mean, Min., and Max. summarize richness per location-occasion; N is the number of location-occasion records.
Seasonal richness overview
Number of distinct wildlife species detected for Fall and Winter seasons across all locations (24 locations, 2022-2025)
| Subset | Season | Total richness | Mean | Min. | Max. | N |
|---|---|---|---|---|---|---|
| Coastal | Fall | 7 | 3.2 | 1 | 5 | 15 |
| | Winter | 6 | 3.6 | 2 | 6 | 16 |
| | Mean | 6 | 3.4 | 2 | 6 | 16 |
| | Maximum | 7 | 3.6 | 2 | 6 | 16 |
| Networked | Fall | 6 | 2.8 | 1 | 4 | 28 |
| | Winter | 8 | 2.8 | 1 | 6 | 26 |
| | Mean | 7 | 2.8 | 1 | 5 | 27 |
| | Maximum | 8 | 2.8 | 1 | 6 | 28 |
| All | Mean | 7 | 3.1 | 1 | 5 | 21 |
| All | Maximum | 8 | 3.6 | 2 | 6 | 28 |

Mean, Min., and Max. summarize richness per location-occasion; N is the number of location-occasion records.
Seasonal richness by subset
Number of distinct wildlife species detected for Fall and Winter seasons across all locations (24 locations, 2022-2025)
Richness by occasion
Overall richness peaked during the final two occasions, Fall 2024 and Winter 2024, and was lowest during Fall 2023. The average richness per location was consistently higher in the Coastal subset (mean = 4.0) than the Networked subset (mean = 3.3).
Trends in richness at the occasion level appeared to be influenced by camera uptime, particularly for the Coastal subset. However, during one occasion when only a single Coastal camera was active, it still detected three species. The Networked subset recorded a stable number of species during its first four occasions, even as the number of active cameras tripled.
Richness was often more similar between sequential Fall-Winter occasion pairs (e.g., Fall 2022 and Winter 2022) than among the same seasons in different years (e.g., all Fall occasions). Within these pairs, Networked locations consistently recorded greater richness in Winter than in Fall.
Richness by occasion
Species richness by 3-month occasion for the combined network and individual subsets during Fall and Winter seasons. Large circles show overall richness; small open circles represent individual location richness. Solid lines connect sequential occasions; dashed lines indicate a 6-month gap between Winter and Fall seasons.
| Subset | ID | Season | Year | Total richness | Mean | Min. | Max. | N |
|---|---|---|---|---|---|---|---|---|
| All | 1 | Fall | 2022 | 5 | 3.3 | 2 | 5 | 12 |
| | 2 | Winter | 2022 | 7 | 3.1 | 1 | 6 | 10 |
| | 3 | Fall | 2023 | 4 | 2.8 | 1 | 4 | 13 |
| | 4 | Winter | 2023 | 5 | 2.5 | 1 | 4 | 13 |
| | 5 | Fall | 2024 | 8 | 2.7 | 1 | 5 | 18 |
| | 6 | Winter | 2024 | 8 | 3.6 | 1 | 6 | 19 |
| | Mean | | | 6 | 3.0 | 1 | 5 | 14 |
| | Maximum | | | 8 | 3.6 | 2 | 6 | 19 |

Mean, Min., and Max. summarize richness per location; N is the number of active locations.
Occasion richness overview
Number of distinct wildlife species detected for each 3-month occasion across all locations (24 cameras, 6 occasions, 2022-2025).
| Subset | ID | Season | Year | Total richness | Mean | Min. | Max. | N |
|---|---|---|---|---|---|---|---|---|
| Coastal | 1 | Fall | 2022 | 5 | 3.5 | 2 | 5 | 8 |
| | 2 | Winter | 2022 | 6 | 3.3 | 2 | 6 | 7 |
| | 3 | Fall | 2023 | 3 | 3.0 | 3 | 3 | 1 |
| | 4 | Winter | 2023 | 3 | 3.0 | 3 | 3 | 1 |
| | 5 | Fall | 2024 | 6 | 2.8 | 1 | 5 | 6 |
| | 6 | Winter | 2024 | 6 | 4.0 | 3 | 5 | 8 |
| | Mean | | | 5 | 3.3 | 2 | 4 | 5 |
| | Maximum | | | 6 | 4.0 | 3 | 6 | 8 |
| Networked | 1 | Fall | 2022 | 4 | 3.0 | 2 | 4 | 4 |
| | 2 | Winter | 2022 | 5 | 2.7 | 1 | 4 | 3 |
| | 3 | Fall | 2023 | 4 | 2.8 | 1 | 4 | 12 |
| | 4 | Winter | 2023 | 5 | 2.4 | 1 | 4 | 12 |
| | 5 | Fall | 2024 | 6 | 2.7 | 1 | 4 | 12 |
| | 6 | Winter | 2024 | 8 | 3.3 | 1 | 6 | 11 |
| | Mean | | | 5 | 2.8 | 1 | 4 | 9 |
| | Maximum | | | 8 | 3.3 | 2 | 6 | 12 |
| All | Mean | | | 5 | 3.0 | 2 | 4 | 7 |
| All | Maximum | | | 8 | 4.0 | 3 | 6 | 12 |

Mean, Min., and Max. summarize richness per location; N is the number of active locations.
Occasion richness by subset
Number of distinct wildlife species detected for each 3-month occasion, separated by subset (6 occasions, 2022-2025).
5.3 Species occupancy
Species occupancy was evaluated for each camera location in the Coastal and Networked subsets using the validated data collected between September 2022 and February 2025. The analysis summarized naive occupancy, or the proportion of sites where a species was detected, across annual, seasonal, and 3-month (occasion) timescales.
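The naive-occupancy calculation defined above can be sketched as follows (Python for illustration; the report's pipeline is in R, and the field names here are hypothetical). Naive occupancy is simply the proportion of monitored sites with at least one detection, with no correction for imperfect detection.

```python
from collections import defaultdict

def naive_occupancy(detections, n_sites):
    """Map species -> fraction of the n_sites locations with a detection."""
    sites = defaultdict(set)
    for rec in detections:
        sites[rec["species"]].add(rec["site"])
    return {sp: len(s) / n_sites for sp, s in sites.items()}

detections = [  # toy records, not Preserve data
    {"site": "A", "species": "Coyote"},
    {"site": "B", "species": "Coyote"},
    {"site": "A", "species": "Bobcat"},
]
print(naive_occupancy(detections, n_sites=4))  # {'Coyote': 0.5, 'Bobcat': 0.25}
```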
Overall occupancy
Nine mammal species were detected across the 24 locations. Four species were widespread, with naive occupancy values greater than 50%: coyote (Canis latrans) and mule deer (Odocoileus hemionus) were detected at 22 locations (92%), wild pig (Sus scrofa) at 21 locations (88%), and bobcat (Lynx rufus) at 15 locations (63%).
Five species were detected at fewer than 10 locations each. These included raccoon (Procyon lotor; 9 locations), puma (Puma concolor; 5 locations), striped skunk (Mephitis mephitis; 4 locations), western gray squirrel (Sciurus griseus; 2 locations), and gray fox (Urocyon cinereoargenteus; 1 location).
Annual occupancy patterns
The most common species—coyote, wild pig, and deer—were detected at a high proportion of locations every year in both the Coastal and Networked subsets.
Occupancy patterns for carnivores were more variable. Bobcat was detected every year, but its occupancy fluctuated, dropping in the Coastal subset in 2023. Puma had the lowest occupancy among the larger carnivores and was not detected at any location in 2023. Pumas were detected more consistently in the Coastal subset, whereas they were only detected in the Networked subset during the final year of the study.
Differences between the two camera subsets were most apparent for the less-common species. Raccoon was detected during more years in the Coastal subset but at a higher proportion of locations in the Networked subset in 2025. Striped skunk, squirrel, and gray fox were detected more consistently in the Networked subset. Notably, gray fox and squirrel were exclusively detected in the Networked subset.
Annual occupancy trends
Occupancy by year for combined subsets and individually. Species are grouped by diet type and ordered by abundance. Circle size and color intensity increase with occupancy. Years reflect aggregated data from Fall and Winter seasons only.
| Diet | Animal | Overall | 2022 | 2023 | 2024 | 2025 |
|---|---|---|---|---|---|---|
| Carnivore | Coyote | 92% | 100% | 85% | 81% | 84% |
| | Bobcat | 58% | 58% | 25% | 33% | 37% |
| | Puma | 21% | 8% | --- | 5% | 21% |
| Omnivore | Wild pig | 88% | 67% | 65% | 71% | 47% |
| | Raccoon | 38% | 17% | --- | 24% | 32% |
| | Striped skunk | 17% | --- | 5% | 10% | 11% |
| | Gray fox | 4% | --- | --- | 5% | --- |
| Herbivore | Deer | 92% | 100% | 85% | 90% | 74% |
| | Squirrel | 8% | --- | --- | 10% | 5% |
Annual occupancy overview
Naive occupancy (proportion of locations with detections) for each species by year across all monitoring locations. Years reflect aggregated data from Fall and Winter seasons only.
| Diet | Animal | Overall | 2022 | 2023 | 2024 | 2025 |
|---|---|---|---|---|---|---|
| Coastal | | | | | | |
| Carnivore | Coyote | 100% | 100% | 100% | 88% | 88% |
| | Bobcat | 70% | 62% | 14% | 38% | 38% |
| | Puma | 30% | 12% | --- | 12% | 25% |
| Omnivore | Wild pig | 100% | 75% | 57% | 88% | 88% |
| | Raccoon | 40% | 25% | --- | 25% | 12% |
| | Striped skunk | 10% | --- | --- | 12% | --- |
| | Gray fox | 0% | --- | --- | --- | --- |
| Herbivore | Deer | 100% | 100% | 86% | 88% | 100% |
| | Squirrel | 0% | --- | --- | --- | --- |
| Networked | | | | | | |
| Carnivore | Coyote | 86% | 100% | 77% | 77% | 82% |
| | Bobcat | 50% | 50% | 31% | 31% | 36% |
| | Puma | 14% | --- | --- | --- | 18% |
| Omnivore | Wild pig | 79% | 50% | 69% | 62% | 18% |
| | Raccoon | 36% | --- | --- | 23% | 45% |
| | Striped skunk | 21% | --- | 8% | 8% | 18% |
| | Gray fox | 7% | --- | --- | 8% | --- |
| Herbivore | Deer | 86% | 100% | 85% | 92% | 55% |
| | Squirrel | 14% | --- | --- | 15% | 9% |
Annual occupancy by subset
Naive occupancy for each species by year, separated by Coastal and Networked subsets. Years reflect aggregated data from Fall and Winter seasons only.
Seasonal occupancy patterns
Occupancy for all three detected carnivore species was greater during the Winter season (December-February) than the Fall season (September-November). Puma occupancy, in particular, showed a more than four-fold increase from Fall (4%) to Winter (21%). Raccoon and striped skunk occupancy was also greater in Winter. In contrast, deer and wild pig occupancy was higher in Fall.
The low number of total detections for gray fox, striped skunk, and squirrel made seasonal patterns difficult to assess. Based on the limited data, gray fox was detected only in the Fall, while squirrel occupancy was equal between Fall and Winter.
Seasonal occupancy comparison
Occupancy for combined and individual subsets for Fall and Winter seasons. Species are grouped by diet type and ordered by abundance. Circle size and color intensity increase with occupancy (2022-2025).
| Diet | Animal | Overall | Fall | Winter |
|---|---|---|---|---|
| Carnivore | Coyote | 92% | 83% | 92% |
| | Bobcat | 58% | 42% | 54% |
| | Puma | 21% | 4% | 21% |
| Omnivore | Wild pig | 88% | 83% | 79% |
| | Raccoon | 38% | 4% | 33% |
| | Striped skunk | 17% | 4% | 12% |
| | Gray fox | 4% | 4% | --- |
| Herbivore | Deer | 92% | 92% | 88% |
| | Squirrel | 8% | 4% | 4% |
Seasonal occupancy overview
Naive occupancy for each species for Fall and Winter seasons across all locations (24 locations, 2022-2025)
| Diet | Animal | Overall | Fall | Winter |
|---|---|---|---|---|
| Coastal | | | | |
| Carnivore | Coyote | 100% | 90% | 100% |
| | Bobcat | 70% | 50% | 70% |
| | Puma | 30% | 10% | 30% |
| Omnivore | Wild pig | 100% | 90% | 90% |
| | Raccoon | 40% | 10% | 30% |
| | Striped skunk | 10% | 10% | --- |
| | Gray fox | 0% | --- | --- |
| Herbivore | Deer | 100% | 100% | 90% |
| | Squirrel | 0% | --- | --- |
| Networked | | | | |
| Carnivore | Coyote | 86% | 79% | 86% |
| | Bobcat | 50% | 36% | 43% |
| | Puma | 14% | --- | 14% |
| Omnivore | Wild pig | 79% | 79% | 71% |
| | Raccoon | 36% | --- | 36% |
| | Striped skunk | 21% | --- | 21% |
| | Gray fox | 7% | 7% | --- |
| Herbivore | Deer | 86% | 86% | 86% |
| | Squirrel | 14% | 7% | 7% |
Seasonal occupancy by subset
Naive occupancy for each species for Fall and Winter seasons, separated by subset (2022-2025)
Occupancy by 3-month occasion
Direct comparisons of occupancy by 3-month occasions were constrained by a key methodological limitation. Only one Coastal location (Coastal 8) remained active during occasions 3 and 4 (September 2023-February 2024). This reduced effort biased occupancy estimates for the Coastal subset during this 6-month period.
The 100% occupancy values for coyote, deer, and wild pig in the Coastal subset during these two occasions reflected their detection at this single active site. The absence of other species may have been an artifact of monitoring effort rather than a true absence. Neither bobcats nor pumas were ever detected at the Coastal 8 location during the study period; therefore, their absence in the Coastal subset during occasions 3 and 4 was an expected outcome of the reduced camera operation.
In the Networked subset, where camera operation was consistent, coyote and deer were detected in every occasion, with minimum occupancy values of 67% and 74%, respectively. Bobcat was also detected consistently across all occasions in the Networked subset.
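One way to guard against the single-camera bias described above is to flag any occasion in which fewer than a minimum number of cameras in a subset were active, so its occupancy values can be excluded or annotated. The function and the 3-camera cutoff below are hypothetical; the active-camera counts come from the Coastal occasion table.

```python
def flag_low_effort(active_cameras_by_occasion, min_cameras=3):
    """Return occasion IDs where too few cameras were active to compare."""
    return [occ for occ, n in sorted(active_cameras_by_occasion.items())
            if n < min_cameras]

# Coastal subset: active locations per occasion (N column of the occasion table)
coastal_n = {1: 8, 2: 7, 3: 1, 4: 1, 5: 6, 6: 8}
print(flag_low_effort(coastal_n))  # [3, 4]
```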
Occupancy by occasion
Occupancy by 3-month occasion for combined and individual subsets for Fall and Winter seasons. Species are grouped by diet type and ordered by abundance. Circle size and color intensity increase with occupancy (2022-2025).
| Diet | Animal | Overall | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|---|---|
| Carnivore | Coyote | 92% | 100% | 90% | 71% | 79% | 67% | 89% |
| | Bobcat | 58% | 58% | 50% | 29% | 14% | 17% | 47% |
| | Puma | 21% | --- | 10% | --- | --- | 6% | 21% |
| Omnivore | Wild pig | 88% | 67% | 60% | 71% | 43% | 72% | 74% |
| | Raccoon | 38% | 8% | 10% | --- | 7% | --- | 37% |
| | Striped skunk | 17% | --- | 10% | --- | --- | 6% | 11% |
| | Gray fox | 4% | --- | --- | --- | --- | 6% | --- |
| Herbivore | Deer | 92% | 100% | 80% | 86% | 86% | 94% | 74% |
| | Squirrel | 8% | --- | --- | --- | --- | 6% | 5% |
Occasion occupancy overview
Overall occupancy for each species by each 3-month occasion across all locations (24 cameras, 6 occasions, 2022-2025)
| Diet | Animal | Overall | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|---|---|
| Coastal | | | | | | | | |
| Carnivore | Coyote | 100% | 100% | 86% | 100% | 100% | 50% | 100% |
| | Bobcat | 70% | 62% | 57% | --- | --- | 17% | 50% |
| | Puma | 30% | --- | 14% | --- | --- | 17% | 25% |
| Omnivore | Wild pig | 100% | 75% | 71% | 100% | 100% | 83% | 100% |
| | Raccoon | 40% | 12% | 14% | --- | --- | --- | 25% |
| | Striped skunk | 10% | --- | --- | --- | --- | 17% | --- |
| | Gray fox | 0% | --- | --- | --- | --- | --- | --- |
| Herbivore | Deer | 100% | 100% | 86% | 100% | 100% | 100% | 100% |
| | Squirrel | 0% | --- | --- | --- | --- | --- | --- |
| Networked | | | | | | | | |
| Carnivore | Coyote | 86% | 100% | 100% | 69% | 77% | 75% | 82% |
| | Bobcat | 50% | 50% | 33% | 31% | 15% | 17% | 45% |
| | Puma | 14% | --- | --- | --- | --- | --- | 18% |
| Omnivore | Wild pig | 79% | 50% | 33% | 69% | 38% | 67% | 55% |
| | Raccoon | 36% | --- | --- | --- | 8% | --- | 45% |
| | Striped skunk | 21% | --- | 33% | --- | --- | --- | 18% |
| | Gray fox | 7% | --- | --- | --- | --- | 8% | --- |
| Herbivore | Deer | 86% | 100% | 67% | 85% | 85% | 92% | 55% |
| | Squirrel | 14% | --- | --- | --- | --- | 8% | 9% |
Occasion occupancy by subset
Occupancy for each species by 3-month occasion, separated by subset (6 occasions, 2022-2025). Only one Coastal location was active during occasions 3 and 4.
6 Discussion
6.1 Summary of key findings
The analysis of camera performance and biodiversity metrics from the Fall and Winter seasons provided several key insights into the Preserve’s terrestrial mammal community and its monitoring program.
Camera network performance improved substantially over the study period, though operational consistency was a challenge. The newer Networked cameras demonstrated higher uptime once deployed, but the older Coastal array suffered from extended periods of inactivity, particularly in 2023. This inconsistent monitoring effort was a critical factor that complicated direct comparisons of wildlife metrics across years. Addressing the causes of inactivity, especially in the Coastal array, is crucial for the long-term integrity of the monitoring program.
The Preserve supported a mammal community dominated by a core group of widespread, adaptable species. Coyote, deer, and wild pig were present at nearly all locations and were detected consistently across all years and seasons. Their pervasive and persistent presence suggested they were well-established residents that thrived in the habitats across the Preserve. Management of these species should consider their roles as key ecosystem drivers (e.g., herbivory, predation) and the potential for human-wildlife conflict.
A notable disparity existed between the detection of large and small-bodied mammals. While large species were consistently recorded, smaller animals, particularly mesocarnivores like gray fox and striped skunk, were observed infrequently. This pattern was likely confounded by factors that reduced camera performance, such as overgrown vegetation and shifts in the camera’s field of view. These maintenance-related issues can disproportionately affect the detection of smaller animals, potentially masking their true presence. Alternatively, these low detection rates could reflect genuinely low population densities, representing a potential conservation concern that warrants further investigation.
Wildlife activity showed clear seasonal shifts that can inform management. Carnivore occupancy—especially for pumas—increased substantially during the Winter, which may be linked to prey vulnerability, breeding behavior, or changes in human activity. Conversely, the higher Fall occupancy for deer and wild pigs likely corresponded with their respective breeding and foraging seasons. These predictable patterns can help guide the scheduling of management activities, such as restoration work or public access, to minimize disturbance during sensitive periods.
Distinct spatial patterns in biodiversity were observed along the Pacific Coast. Several locations along the southern boundary showed the highest species richness, which suggested that specific habitat features in this area may support a more diverse community. Furthermore, species composition differed between the Coastal and Networked camera subsets; pumas were more associated with the Coastal subset, while striped skunks were more consistently found in the Networked subset. These patterns indicated a potential unaccounted-for difference between the camera arrays, warranting further investigation to understand the source of this variation.
6.2 Limitations and future directions
This report provides a foundational summary based on the available Fall and Winter data. A key limitation was the variability in camera uptime, which could influence biodiversity metrics. While this analysis accounted for uptime where possible, the results should be interpreted with this context in mind. To improve data quality for future analyses, establishing minimum operational thresholds for cameras (e.g., a minimum number of active days per month) is recommended.
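The minimum-operational-threshold idea recommended above can be sketched as a simple filter over camera-months. The function name and the 20-day cutoff are illustrative assumptions, not values from the report.

```python
def qualifying_months(active_days_by_month, min_days=20):
    """Return months in which a camera met the minimum-effort threshold."""
    return sorted(m for m, d in active_days_by_month.items() if d >= min_days)

# Hypothetical camera: active days per month
camera = {"2024-09": 30, "2024-10": 12, "2024-11": 28}
print(qualifying_months(camera))  # ['2024-09', '2024-11']
```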
The low detection rates for smaller species represented a significant data gap. The camera network was primarily designed to monitor medium-to-large mammals, and operational issues such as vegetation obstruction likely biased detections against smaller animals. This under-representation of a critical prey base limits the ecological inferences that can be drawn from the data. Future efforts should focus on improving the detection of this faunal group through a systematic vegetation management protocol and adjustments to camera placement to better capture ground-level activity.
Several opportunities exist to build upon this work. The analysis was scoped to richness and naive occupancy, but further investigation into metrics like image count and detection rates would provide a more nuanced understanding of wildlife activity. These analyses would be greatly enhanced by incorporating data from the Spring and Summer seasons to provide a complete, year-round picture of mammal ecology at the Preserve.
The visualizations and tables presented in this report were comprehensive to offer a range of options for data presentation. Future work could involve collaborating with Preserve managers to refine these into a core suite of visualizations that best meets their ongoing monitoring needs. There is also potential to develop interactive dashboards that would allow managers to explore the data dynamically.
7 Conclusion
This project successfully established a foundational workflow for analyzing the Preserve’s extensive wildlife camera data archive. The initial, critical phase of the work involved a comprehensive scoping process that inventoried, standardized, and assessed all existing image data. This process transformed a large and heterogeneous collection of data sets into a single, analysis-ready format. The subsequent compatibility and coverage assessments were essential for identifying a reliable data subset, which ultimately defined the analytical scope for this report: an analysis of camera performance, species richness, and naive occupancy for 24 locations during the Fall and Winter seasons from 2022 to 2025.
The analysis of this validated subset provided the first multi-year, quantitative baseline of the terrestrial mammal community along the Preserve’s coast. Key findings revealed improvements in camera network performance over time, identified a core group of dominant species such as coyote and mule deer, and uncovered distinct spatial and seasonal patterns in biodiversity. For example, the data highlighted potential biodiversity hotspots and showed that carnivore occupancy, particularly for pumas, increased during Winter months. These insights, while significant, represent a partial view of the ecological dynamics at the Preserve, as they were derived from the portion of the data deemed ready for immediate analysis.
The scoping phase also identified crucial data gaps, most notably the large volume of unvalidated image data from the Spring and Summer seasons. The exclusion of these seasons from the current analysis underscores a key limitation and highlights the most critical next step. To build upon this foundational work, priority must be given to completing the image validation for these remaining seasons, as recommended in the scoping assessment. Fulfilling this recommendation will unlock the full potential of the data set, enabling a comprehensive, year-round analysis of wildlife activity.
Ultimately, this project delivered more than a set of baseline biodiversity metrics; it created a repeatable, scalable process for data management and analysis. By standardizing the historical data and establishing a clear path forward, this work provides the necessary framework for developing a robust, long-term wildlife monitoring program at the Preserve.
8 References
Change log
2025-09-17
- Revised parse_json() to handle trailing space in “Damsite” location name
- Added and configured {renv}
- Removed broken link to Camera data section in Methods
2025-09-09
- Initial draft for review
Session information
R version 4.5.0 (2025-04-11)
Platform: aarch64-apple-darwin20
Running under: macOS Sequoia 15.6.1
Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/4.5-arm64/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.5-arm64/Resources/lib/libRlapack.dylib; LAPACK version 3.12.1
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
time zone: America/Los_Angeles
tzcode source: internal
attached base packages:
[1] stats graphics grDevices datasets utils methods base
other attached packages:
[1] yaml_2.3.10 webshot2_0.1.2 vroom_1.6.5 viridis_0.6.5
[5] viridisLite_0.4.2 tidyr_1.3.1 tibble_3.3.0 testthat_3.2.3
[9] terra_1.8-60 stringr_1.5.2 snakecase_0.11.1 sf_1.0-21
[13] sessioninfo_1.2.3 scales_1.4.0 renv_1.1.5 readxl_1.4.5
[17] readr_2.1.5 RColorBrewer_1.1-3 ragg_1.5.0 purrr_1.1.0
[21] pdftools_3.6.0 patchwork_1.3.2 marquee_1.2.1 lubridate_1.9.4
[25] leaflet_2.2.3 leaflegend_1.2.1 knitr_1.50 kableExtra_1.4.0
[29] jsonlite_2.0.0 janitor_2.2.1 here_1.0.2 gtsummary_2.4.0
[33] gt_1.0.0 glue_1.8.0 ggplot2_4.0.0 ggfittext_0.10.2
[37] fuzzyjoin_0.1.6.1 fs_1.6.6 forcats_1.0.0 DT_0.34.0
[41] dplyr_1.1.4 cowplot_1.2.0 colorspace_2.1-1 checkmate_2.3.3
loaded via a namespace (and not attached):
[1] DBI_1.2.3 gridExtra_2.3 rlang_1.1.6
[4] magrittr_2.0.4 e1071_1.7-16 compiler_4.5.0
[7] systemfonts_1.2.3 vctrs_0.6.5 pkgconfig_2.0.3
[10] crayon_1.5.3 fastmap_1.2.0 backports_1.5.0
[13] labeling_0.4.3 promises_1.3.3 rmarkdown_2.29
[16] tzdb_0.5.0 ps_1.9.1 bit_4.6.0
[19] xfun_0.53 cachem_1.1.0 shades_1.4.0
[22] later_1.4.4 parallel_4.5.0 R6_2.6.1
[25] bslib_0.9.0 stringi_1.8.7 jquerylib_0.1.4
[28] brio_1.1.5 cellranger_1.1.0 Rcpp_1.1.0
[31] base64enc_0.1-3 leaflet.providers_2.0.0 timechange_0.3.0
[34] tidyselect_1.2.1 rstudioapi_0.17.1 codetools_0.2-20
[37] websocket_1.4.4 processx_3.8.6 qpdf_1.4.1
[40] withr_3.0.2 S7_0.2.0 askpass_1.2.1
[43] evaluate_1.0.5 units_0.8-7 proxy_0.4-27
[46] xml2_1.4.0 pillar_1.11.0 KernSmooth_2.23-26
[49] generics_0.1.4 rprojroot_2.1.1 chromote_0.5.1
[52] hms_1.1.3 class_7.3-23 tools_4.5.0
[55] grid_4.5.0 crosstalk_1.2.2 cli_3.6.5
[58] textshaping_1.0.3 svglite_2.2.1 gtable_0.3.6
[61] sass_0.4.10 digest_0.6.37 classInt_0.4-11
[64] htmlwidgets_1.6.4 farver_2.1.2 htmltools_0.5.8.1
[67] lifecycle_1.0.4 bit64_4.6.0-1